
Empowering the world to visualize ideas through pioneering research and design. Try Dream Machine for free → lumalabs.ai/dream-machine
Headquarters / domain: lumalabs.ai (Palo Alto)
Flagship product: Dream Machine — web and iOS image & video generation platform
Core models: Ray3 family (Ray3, Ray3 Modify, Ray3.14) for production-quality video
Recent funding: Announced $900M Series C led by HUMAIN
Employees (approx.): 240
Focus: Generative multimedia (image and video) creation and multimodal AI models
Industry: Software Development
Series B: $43,000,000 — announced to expand generative 3D and multimodal AI work
Series C: $900,000,000 — announced with participation from AMD Ventures, Andreessen Horowitz, Amplify Partners, and others; included a partnership on a large compute buildout
Investors: HUMAIN-led strategic investment with participation from institutional VCs including Andreessen Horowitz, AMD Ventures, Amplify Partners, and Matrix (per company reporting and coverage)
About Luma AI
Luma's mission is to build multimodal AGI. Through our research on video, 3D, and now multimodal models, we believe that AI needs to be jointly trained over all signal modalities – text, video, audio, and images – analogous to the human brain.
To advance our mission, we build and operate the full stack end-to-end, spanning foundation models, inference systems, and products. This integrated approach powers technologies like Ray3, which is seeing rapidly growing adoption among Fortune 500 companies across media, entertainment, and advertising. Backed by a recent $900M Series C and our partnership with Humain to build a 2 GW compute supercluster (Project Halo), our models and the Dream Machine platform are now enabling creatives worldwide to tell some of the most impactful stories of our time.
This is a rare and foundational opportunity to define the future of multimodal AI. You will be at the forefront of building and training large-scale multimodal models, directly impacting how users interact with pixels. This role offers the chance to bridge cutting-edge research with magical, shipped products, working end-to-end on novel problems with no existing playbook.
What You'll Do
This opportunity involves both the "science" and "engineering" parts of research, two aspects of equal importance.
This is a multi-stack opportunity: you will work at the intersection of modeling, data, systems, and evaluation.
Who You Are
What Sets You Apart (Bonus Points)
Experience in any of the following areas of data, modeling, or evaluation:
Your application is reviewed by real people.