
Powerful, integrated real-world data and AI-driven solutions to transform how insights are generated and accelerate therapeutic innovations to patients.

Focus: Real-world oncology data and AI-driven SaaS for life sciences
Headquarters: Cambridge/Boston, Massachusetts, USA
Founded: 2018
Employees: 871
Known funding: Raised >$230M; Series C led by Sixth Street (Mar 2022), post-money valuation reported $1.9B
Industry: Healthcare; Life Sciences; AI
Investors: Strategic and institutional investors including SAI Group (SymphonyAI), Sixth Street, Declaration Partners, Maverick Ventures, and AllianceBernstein (private credit/growth)
Job Requirements
We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on enterprise products built on AI and big data engineering. The candidate will work with a team of architects, data scientists/AI specialists, data engineers, integration specialists, and UX developers.
Work Experience
Advanced SQL skills: query authoring, experience working with relational databases, and working familiarity with a variety of database systems.
Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience supporting and working with cross-functional teams in a dynamic environment.
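The SQL and query-authoring skills above can be illustrated with a minimal, self-contained sketch. It uses Python's built-in sqlite3 module and a hypothetical treatments table; the table and column names are illustrative only, not the company's actual schema.

```python
import sqlite3

# In-memory database with a hypothetical oncology-style table
# (names are illustrative, not a real schema).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE treatments (patient_id TEXT, drug TEXT, start_date TEXT)"
)
conn.executemany(
    "INSERT INTO treatments VALUES (?, ?, ?)",
    [
        ("p1", "drugA", "2021-01-05"),
        ("p1", "drugB", "2021-06-10"),
        ("p2", "drugA", "2021-02-01"),
    ],
)

# A typical query-authoring task: count distinct patients per drug.
rows = conn.execute(
    """
    SELECT drug, COUNT(DISTINCT patient_id) AS n_patients
    FROM treatments
    GROUP BY drug
    ORDER BY drug
    """
).fetchall()
print(rows)  # [('drugA', 2), ('drugB', 1)]
```

The same aggregation pattern carries over directly to larger relational stores; only the connection layer changes.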
Role
6+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Information Systems, or another quantitative field. Should have experience with the following software/tools:
Experience with relational SQL and NoSQL databases, including Postgres and Amazon RDS (MSSQL).
Experience in PySpark programming.