Data Engineer
As a Data Engineer at Nickelytics, you will be an integral part of a fast-paced adtech startup. You will serve as the primary technical expert, responsible for leading and contributing to data projects from inception to completion. This role is critical in designing, developing, and maintaining data architectures, ETL processes, and pipelines, often leveraging location intelligence data. The Engineer will collaborate with cross-functional teams, stakeholders, and data professionals to deliver data projects that align with business objectives, derive relevant insights efficiently, and present them in a consumable format for stakeholders and customers.
Key Responsibilities:
Develop a thorough understanding of the data we maintain.
Drive data warehouse design, modeling, security and ETL development.
Develop and maintain ETL routines using orchestration tools.
Expand and optimize data and data pipeline architecture.
Ingest, sanitize and aggregate data from multiple sources.
Develop algorithms, processes and features valuable to downstream processes and teams.
Collaborate with engineers to adopt best practices in data integrity, validation and documentation.
Investigate data issues and respond to client inquiries, clearly communicating any challenges.
Design and scale data infrastructure to support high-demand services and large-scale workloads.
Requirements:
4+ years of experience in data processing, pipeline development, and related domains.
Proficiency in Python and SQL.
Experience with data modeling, warehousing, streaming and building ETL/ELT pipelines in cloud environments (preferably on AWS).
Ability to visualize and explain complex data, with and without Jupyter Notebooks.
Experience with orchestration tools for ETLs.
Experience with container workflows and familiarity with Docker, Docker Compose and/or Tilt.
Experience with distributed columnar data stores (ClickHouse, Snowflake or BigQuery), relational databases (PostgreSQL), NoSQL databases (Cassandra or MongoDB), and data lakes in object storage (S3) or HDFS.
Experience with query engines such as Trino and/or Athena.
Experience with Change Data Capture (CDC) tools.
Nice to haves:
Experience with Kubernetes, NiFi, Kafka, Kinesis, EKS and/or Grafana.
Experience with Ruby.
Familiarity with the concepts of compiled and interpreted languages, and proficiency in at least one strongly typed language.
Experience with location data, large time-series datasets and/or advertising data.
Basic GCP expertise.
- Department: Nickelytics
- Locations: LATAM
- Remote status: Fully Remote