
B2B Rocket – Industry-Leading AI Agents for Scalable B2B Growth
Product: AI agents for automated B2B lead generation, multi-channel outreach, and appointment setting
Data assets: a claimed contact database of 4+ billion data points, plus intent and website-visitor signals
Leadership: Noah A. Loul (CEO) with a small executive team
Headquarters / address: Lewes, Delaware (16192 Coastal Highway, Lewes, DE 19958)
Funding stage: Pre-Seed (Dec 21, 2022 per Crunchbase)
Focus: B2B sales automation and lead generation
Founded: 2022
Industry: Software Development
Funding detail: Pre-Seed round recorded on Crunchbase, with the amount and investors undisclosed
Please read the job description carefully and apply only if you are based in Pune, Maharashtra, or Bengaluru, Karnataka, India.
We are seeking a skilled and experienced Data Engineer (4–6 years of experience). If you consistently deliver quality results and meet our requirements, you will enjoy a long-term, full-time role and a secure position on the team.
Key Responsibilities:
- Design & build scalable data pipelines:
  - Ingest events from product, CRM, email, and third-party APIs
  - Use Python and SQL, orchestrators such as Airflow/Prefect, and streaming tools such as Kafka/Kinesis (see the sketch below)
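
As an illustration of this responsibility, here is a minimal sketch of an hourly ingestion DAG using Airflow 2.x (2.4+) and its TaskFlow API; the CRM endpoint, response shape, and loader are hypothetical placeholders, not the team's actual stack.

```python
# Minimal Airflow ingestion sketch; endpoint and loader are hypothetical.
from datetime import datetime, timedelta

import requests
from airflow.decorators import dag, task


@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def ingest_crm_events():
    @task(retries=3, retry_delay=timedelta(minutes=5))
    def extract() -> list[dict]:
        # Pull raw events from a third-party API (hypothetical endpoint).
        resp = requests.get("https://api.example-crm.com/v1/events", timeout=30)
        resp.raise_for_status()
        return resp.json()["events"]

    @task
    def load(events: list[dict]) -> None:
        # In production this step would COPY into Snowflake/BigQuery;
        # here it only marks the handoff point.
        print(f"loading {len(events)} events into a staging table")

    load(extract())


ingest_crm_events()
```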
- Warehouse modeling:
  - Create star/snowflake schemas and data marts
  - Leverage dbt for clean, tested, and documented transformation workflows (see the sketch below)
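
To make the modeling work concrete, the sketch below defines a toy star schema in plain SQL, run through stdlib sqlite3 so it stays self-contained; in practice each statement would live as a dbt model against Snowflake/BigQuery. All table and column names are illustrative assumptions.

```python
# Toy star schema in plain SQL via sqlite3; names are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimension: one row per contact (slowly changing attributes omitted).
CREATE TABLE dim_contact (
    contact_key INTEGER PRIMARY KEY,
    email       TEXT,
    company     TEXT
);

-- Dimension: one row per outreach campaign.
CREATE TABLE dim_campaign (
    campaign_key INTEGER PRIMARY KEY,
    channel      TEXT,  -- e.g. email, linkedin
    name         TEXT
);

-- Fact: one row per outreach event, keyed to both dimensions.
CREATE TABLE fact_outreach_event (
    event_id     INTEGER PRIMARY KEY,
    contact_key  INTEGER REFERENCES dim_contact(contact_key),
    campaign_key INTEGER REFERENCES dim_campaign(campaign_key),
    event_type   TEXT,  -- sent / opened / replied / booked
    occurred_at  TEXT   -- ISO-8601 timestamp
);
""")
print("star schema created")
```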
- Elasticsearch search & indexing:
  - Build and optimize indices, mappings, analyzers, synonyms, and relevance tuning (BM25)
  - Develop ingestion pipelines (Logstash/Filebeat/custom solutions) and Kibana dashboards
  - Improve performance via tuned shard counts, refresh intervals, and ILM policies (see the sketch below)
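
The indexing responsibilities might look like this sketch, which creates a tuned index through Elasticsearch's REST API. The index name, fields, synonym list, shard count, and ILM policy name are all hypothetical; since BM25 is Elasticsearch's default similarity, relevance tuning starts from analyzer and mapping choices like these.

```python
# Create a tuned Elasticsearch index over the REST API; all names hypothetical.
import requests

index_definition = {
    "settings": {
        "number_of_shards": 3,                       # sized to expected volume
        "refresh_interval": "30s",                   # relax NRT refresh for heavy indexing
        "index.lifecycle.name": "leads-ilm-policy",  # ILM policy defined separately
        "analysis": {
            "filter": {
                "title_synonyms": {
                    "type": "synonym",
                    "synonyms": [
                        "ceo, chief executive officer",
                        "cto, chief technology officer",
                    ],
                }
            },
            "analyzer": {
                "title_analyzer": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "title_synonyms"],
                }
            },
        },
    },
    "mappings": {
        "properties": {
            "job_title": {"type": "text", "analyzer": "title_analyzer"},
            "company": {"type": "keyword"},
        }
    },
}

resp = requests.put("http://localhost:9200/leads", json=index_definition, timeout=10)
print(resp.json())
```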
- Analytics & ML enablement:
  - Deliver data products for lead scoring, attribution, LTV, and cohort analytics
  - Productionize ML features (feature engineering, embeddings, model outputs) for personalization and ranking (see the sketch below)
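
A toy version of the lead-scoring product: derive per-contact engagement counts from raw events and fit a simple classifier. The event types, label, and model choice are illustrative assumptions, not a prescribed approach.

```python
# Toy lead-scoring sketch; features, label, and model are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

events = pd.DataFrame({
    "contact_id": [1, 1, 2, 2, 3],
    "event_type": ["opened", "replied", "opened", "opened", "sent"],
})
labels = pd.Series({1: 1, 2: 0, 3: 0}, name="booked_meeting")  # hypothetical outcome

# Feature engineering: per-contact counts of each engagement type.
features = pd.crosstab(events["contact_id"], events["event_type"])

model = LogisticRegression().fit(features, labels.loc[features.index])
features["lead_score"] = model.predict_proba(features)[:, 1]
print(features)
```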
- Reliability, security & optimization:
  - Implement and automate testing, data-quality checks, and observability (SLAs, alerting, lineage); see the data-quality sketch after this list
  - Ensure data governance, access control, PII handling, encryption, and GDPR-friendly retention policies
  - Optimize performance and cost: compute/storage, partitioning, caching, and query optimization
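
Data-quality checks of the kind listed above can start as post-load assertions; this sketch checks emptiness, null rates, and a freshness SLA with illustrative thresholds and a stubbed alert hook.

```python
# Post-load data-quality checks; thresholds and alerting are illustrative.
from datetime import datetime, timedelta, timezone

import pandas as pd


def check_batch(df: pd.DataFrame) -> list[str]:
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["email"].isna().mean() > 0.01:  # allow at most 1% null emails
        failures.append("too many null emails")
    newest = pd.to_datetime(df["occurred_at"], utc=True).max()
    if newest < datetime.now(timezone.utc) - timedelta(hours=2):  # freshness SLA
        failures.append("newest row violates the 2h freshness SLA")
    return failures


batch = pd.DataFrame({
    "email": ["a@example.com", None],
    "occurred_at": [datetime.now(timezone.utc).isoformat()] * 2,
})
for failure in check_batch(batch):
    # In production this would page via Slack/PagerDuty instead of printing.
    print("ALERT:", failure)
```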
Requirements:
- 4–6 years of hands-on experience in data engineering roles
- Strong expertise in Python, SQL, and Node.js
- Proven experience with tools such as Airflow, Kafka/Kinesis, dbt, Elasticsearch/Kibana/Logstash, Snowflake/BigQuery, Postgres, MongoDB, and Redis
- Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure as code (Terraform)
- Experience with CI/CD (GitHub Actions), automated testing, and monitoring
- Deep understanding of data security, compliance, and best practices for handling sensitive information
- Strong analytical and problem-solving skills; able to work independently and collaboratively
Nice to Have:
- Prior work on feature stores or ML productionization
- Experience building dashboards and working with cross-functional stakeholders
Work Schedule & Compensation:
- Working Hours: Indian Standard Time (IST) or Eastern Standard Time (EST)
- Budget: $600 to $700
- Long-Term Opportunity: If you consistently meet requirements and perform well, this is a full-time, long-term position