Job Description

As a Senior Data Engineer, you will be responsible for translating client requirements into technical designs and implementing components for data engineering solutions. Leveraging a deep understanding of data integration and big data design principles, you will develop custom solutions or implement packaged solutions. Additionally, you will independently lead design discussions to ensure the overall solution’s integrity.

Your Impact:

  • Data Ingestion, Integration, and Transformation
  • Data Storage and Computation Frameworks, Performance Optimization
  • Analytics & Visualizations
  • Infrastructure & Cloud Computing
  • Data Management Platforms

Key Responsibilities:

  • Develop functionality for data ingestion from multiple heterogeneous sources, in both batch and real time.
  • Create functionality for data analytics, search, and aggregation.
  • Implement solutions for data storage and computation frameworks, with a focus on performance optimization.

Your Skills & Experience:

  • Minimum of 4 years of experience in Big Data technologies.
  • Hands-on experience with the Hadoop ecosystem and related technologies, including HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components needed to build end-to-end data pipelines.
  • Working knowledge of real-time data pipelines is an added advantage.
  • Strong experience in at least one programming language: Java, Scala, or Python (Java preferred).
  • Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery, etc.
  • Working knowledge of data platform-related services on Azure.

Set Yourself Apart With:

  • Hands-on experience with traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres).
  • Understanding of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
  • Familiarity with distributed messaging frameworks such as ActiveMQ, RabbitMQ, and Solace, as well as search and indexing technologies and microservices architectures.
  • Experience in performance tuning and optimization of data pipelines.
  • Cloud data specialty certifications or other related Big Data technology certifications.
