Reported total funding (snapshot): $872,150,000
Related Companies

Company | HQ | Industry | Total Funding
Lovable | SE | Data and Analytics, DeepTech, Information Technology, Software | —
Tabs | US | Data and Analytics, DeepTech, Finance, Information Technology, Professional Services, Software | $92M
xAI | Remote | — | $37B
DeepL | DE | Sales and Marketing | $413M
DevRev | US | Administrative Services, Data and Analytics, DeepTech, Information Technology, Professional Services, Sales and Marketing, Software | $151M
Company Overview
Problem Domain
Developer tooling / cloud IDEs and AI-assisted software development
Founded
2016
Industry
Software Development
Funding Track Record
Series B extension (2023-04-27)
$97.4M
Reported Series B extension that brought Replit's total raised at the time to over $200M.
Series C (2025-09-10)
$250M
Round reported to value the company at approximately $3.0B with participation from Google AI Futures Fund, Amex Ventures, and others.
Investor Signal
“Includes participation from prominent venture investors such as Prysm Capital, Andreessen Horowitz (a16z), Y Combinator, Amex Ventures, Coatue, and others”
Join the Team
Security Engineer
On-Site • San Francisco Bay Area, US
Who you are
4+ years of experience in security engineering, anti-abuse, trust & safety, or fraud detection
Strong programming skills in Python and/or TypeScript for building detection systems and automation
Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)
Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection
Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors
Ability to investigate complex abuse patterns and translate findings into automated defenses
Familiarity with common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse
Clear communication skills for working across Security, Support, Legal, and Engineering teams
Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)
Background in fraud detection, payment abuse, or financial crime
Familiarity with device fingerprinting, IP reputation, and email validation services
Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)
Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)
Prior work with abuse reporting pipelines, trust & safety tooling, or content moderation systems
What the job involves
Benefits
Remote-First and Autonomous Working Environment
Flexible Work Hours
Equity
Home Office Set-Up Stipend
Health, Dental, Vision, and Life Insurance
Short Term and Long Term Disability
Monthly Expenses Stipend
Commuter Benefits
Parental and Baby Bonding Leave
The Anti-Abuse team is the front line defending Replit's platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.

What makes this role unique is the AI-native nature of Replit's platform. You'll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers.

You'll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.
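To make the prompt-injection detection work concrete: in very reduced form, a first-pass filter can be a pattern-based scorer. Everything below (the pattern list, the scoring, the threshold) is invented for this sketch and is not Replit's actual system; a production pipeline would layer an ML/LLM classifier and behavioral signals on top of heuristics like these.

```python
import re

# Illustrative only: a few publicly known jailbreak phrasings.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"pretend (that )?you have no restrictions",
]

def score_prompt_injection(text: str) -> float:
    """Return a naive risk score in [0, 1] based on pattern hits."""
    hits = sum(
        1 for p in INJECTION_PATTERNS
        if re.search(p, text, flags=re.IGNORECASE)
    )
    # Two or more distinct pattern hits saturate the score.
    return min(1.0, hits / 2)
```

A scorer like this would only gate which inputs get escalated to a heavier (and slower) LLM-based classifier, since attackers trivially rephrase around fixed patterns.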
Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions
Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions
Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions
Design automated response mechanisms that enforce platform policies without manual intervention
Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal
Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules
Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity
Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs
Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve
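The automated-response work in the list above can be sketched as a mapping from detection signals to enforcement actions. The signal fields, thresholds, and actions here are hypothetical, chosen for illustration rather than drawn from Replit's actual policy engine:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    RATE_LIMIT = "rate_limit"
    SUSPEND = "suspend"

@dataclass
class AbuseSignal:
    phishing_score: float  # 0..1 from a content classifier (hypothetical)
    cpu_anomaly: bool      # sustained high CPU on free tier (hypothetical)
    new_account: bool      # account younger than some threshold (hypothetical)

def decide(signal: AbuseSignal) -> Action:
    """Map detection signals to an enforcement action.

    Thresholds are made up for the sketch; a real system would
    also log the decision for the appeals workflow.
    """
    if signal.phishing_score >= 0.9:
        return Action.SUSPEND
    if signal.cpu_anomaly and signal.new_account:
        return Action.RATE_LIMIT
    return Action.ALLOW
```

Keeping the decision logic as a pure function over explicit signals makes each enforcement reproducible after the fact, which matters once Support and Legal handle appeals against automated actions.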