Job Description
Publicis Sapient Overview:
Publicis Sapient is a digital transformation partner, helping established organizations reach their digitally enabled future in both how they operate and how they engage their customers. Our approach combines a start-up mindset with modern methodologies, integrating strategy, consulting, customer experience, agile engineering, and creative problem-solving. With a global team of over 20,000 professionals across 53 offices worldwide, we are committed to accelerating our clients’ business growth by designing products and services that truly resonate with their customers.
Job Summary:
As a Senior Associate L1 in Data Engineering, you will be responsible for the technical design and implementation of components for data engineering solutions. Leveraging a deep understanding of data integration and big data design principles, you will develop custom solutions or implement packaged solutions. This is a hands-on technologist role requiring a strong programming background in Spark/PySpark and Java/Scala/Python, along with experience in data ingestion, integration, data wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of Google Cloud Platform (GCP) is also essential.
Role & Responsibilities:
Your role revolves around the design, development, and delivery of solutions related to:
- Data ingestion, integration, and transformation
- Data storage and computation frameworks, with a focus on performance optimizations
- Analytics and visualizations
- Infrastructure and cloud computing
- Data management platforms
Key responsibilities also include building functionality for data ingestion from multiple heterogeneous sources, in both batch and real time, as well as for data analytics, search, and aggregation.
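By way of illustration only (not part of the role requirements), the sketch below shows what such a pipeline might look like in PySpark: a batch pull from a relational source over JDBC and a structured-streaming read from Kafka. All hosts, credentials, table names, topics, and paths are hypothetical placeholders.

```python
# Minimal PySpark sketch: batch ingestion over JDBC plus real-time ingestion
# from Kafka. Connection details, tables, topics, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-example").getOrCreate()

# Batch ingestion: pull a relational table over JDBC, aggregate, land as Parquet.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")  # placeholder source
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "****")
    .load()
)

daily_totals = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_totals"
)

# Real-time ingestion: consume events from a Kafka topic and append to a raw zone.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
    .option("subscribe", "clickstream")                 # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS json_value", "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/raw/clickstream")
    .option("checkpointLocation", "/chk/clickstream")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```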
Experience Guidelines:
Mandatory Experience and Competencies:
- At least 4 years of IT experience, with a minimum of 2 years in data-related technologies.
- At least 2 years of experience in big data technologies.
- Hands-on experience with the Hadoop stack, including components such as HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, and Airflow, as well as building real-time data pipelines.
- Strong experience in at least one of Java, Scala, or Python, with Java preferred.
- Hands-on knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, and GCP BigQuery.
Preferred Experience and Knowledge (Good to Have):
- Hands-on experience with traditional ETL tools and database technologies.
- Knowledge of data governance processes and tools like Collibra, Alation, etc.
- Understanding of distributed messaging frameworks, search & indexing, and microservices architectures.
- Experience in performance tuning and optimization of data pipelines.
- Proficiency in CI/CD practices, including cloud infrastructure provisioning, automated build and deployment pipelines, and code-quality tooling.
- Working knowledge of data platform services, IAM, and data security on at least one cloud platform.
- Cloud data specialty and other relevant Big Data technology certifications.
Personal Attributes:
- Strong, articulate written and verbal communication skills.
- Collaborative team player.
- Self-starter requiring minimal oversight.
- Ability to prioritize and manage multiple tasks.
- Process-oriented mindset with the ability to define and establish processes.
More Information
- Experience: 5-10 years