Role Summary:
We are seeking a Senior Data Engineer to join our team and contribute to the design, development, testing, and maintenance of scalable data platform solutions. You will be responsible for crafting custom ETL solutions to tackle complex business challenges.
Key Responsibilities:
- Design, build, and maintain efficient and scalable data pipelines for data collection, transformation, and loading.
- Develop logical and physical data models, including designs for data warehouses and data marts.
- Implement tests to identify data quality issues such as duplicates, null counts, and data drift.
- Proactively identify opportunities for pipeline optimization.
- Collaborate with stakeholders to gather requirements and understand project scopes.
- Coordinate with upstream and downstream teams to ensure seamless execution of data pipelines.
- Maintain business-critical metric dashboards.
Key Requirements:
- Bachelor’s degree in Computer Science or equivalent, with at least 3 years of experience.
- Proficiency in one or more programming languages such as Python, Go, or Scala.
- Strong familiarity with frameworks such as Hive, Presto, Spark (including PySpark), and Apache Airflow.
- Understanding of large-scale distributed systems, including multi-tier architectures, application security, monitoring, and storage systems.
- Exposure to streaming data use cases using technologies such as Kafka or Spark Structured Streaming.
- Solid grasp of algorithms and data structures.
- Excellent communication skills.
- Mandatory Skills: Spark, Scala/Python, SQL, AWS.
What We Offer:
- A collaborative and supportive work culture where your ideas and opinions are valued.
- Opportunities for professional growth in a thriving and sustainable industry.
More Information:
- Experience: 5-10 years