Partner One Capital | São Paulo, State of São Paulo, Brazil | Contract | 2024-06-03

This is a fully remote position.

We are seeking a skilled and experienced Data Engineer to join our Threat Research team. The primary responsibility of this role is to design, develop, and maintain data pipelines for threat intelligence ingestion, validation, and export automation flows. The ideal candidate has a strong background in data engineering, experience with data validation techniques, and proficiency in building scalable, efficient data pipelines. The candidate must have extensive experience with SQL, data orchestration tools, Python, and SIEM platforms; experience in threat intelligence or other cybersecurity applications is strongly preferred, as is experience with Snowflake, AWS, Azure, or Google Cloud Platform.

Responsibilities
  • Design, develop, and maintain data pipelines for ingesting threat intelligence data from various sources into our data ecosystem.
  • Implement data validation processes to ensure data accuracy, completeness, and consistency.
  • Collaborate with threat analysts to understand data requirements and design appropriate solutions.
  • Develop automation scripts and workflows for data export processes to external systems or partners.
  • Optimize and enhance existing data pipelines for improved performance and scalability.
  • Monitor data pipelines and troubleshoot issues as they arise, ensuring continuous data availability and integrity.
  • Document technical specifications, data flows, and procedures for data pipeline maintenance and support.
  • Stay current on emerging technologies and best practices in data engineering and incorporate them into our data ecosystem.
  • Provide technical guidance and support to other team members on data engineering best practices and methodologies.

Requirements
  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • Proven experience as a Data Engineer or in a similar role, with a focus on data ingestion, validation, and export automation.
  • Strong proficiency in Python.
  • Experience with data pipeline orchestration tools such as Apache Airflow, Apache NiFi, or similar.
  • Solid understanding of database systems (relational and NoSQL) and data modeling, with a demonstrated ability to write complex SQL queries.
  • Familiarity with cloud data platforms such as Snowflake, AWS, Azure, or Google Cloud Platform; Snowflake experience is preferred.
  • Experience with data validation techniques and tools for ensuring data quality.
  • Experience building container images and deploying them using technologies such as Docker and Kubernetes.
  • Strong understanding of schema validation with Apache Avro or JSON Schema.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment.