Job Title: Senior Associate L2 – Data Engineering
Publicis Sapient Overview:
At Publicis Sapient, we empower our clients to excel in the next era by creating business value through expert strategies, customer-centric experience design, and world-class product engineering. As businesses undergo digital transformation, we recognize the urgent need for innovation across industries, from financial services and automotive to consumer products, retail, energy, and travel.
To navigate this transformative journey, we seek visionary leaders and dynamic individuals who:
- Embrace innovation and are willing to explore the unknown.
- Demonstrate unwavering optimism and believe in limitless possibilities.
- Possess deep expertise, collaboration skills, and adaptability.
- Reimagine the way businesses operate to improve people's lives and the world around them.
Our success is fueled by:
- Pushing boundaries.
- Collaborating across disciplines.
- Operating within highly agile teams.
- Harnessing the latest technologies and platforms.
If this sounds like you, we invite you to join us in shaping the future.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. Leveraging a deep understanding of data integration and big data design principles, you will develop custom solutions or implement packaged solutions. You will independently lead design discussions to ensure the overall health of the solution.
This role requires a hands-on technologist with a strong programming background in Java, Scala, or Python. You should have experience in data ingestion, integration, data wrangling, computation, and analytics pipelines, as well as exposure to Hadoop ecosystem components. Hands-on knowledge of at least one cloud platform (AWS, GCP, or Azure) is also essential.
Role & Responsibilities:
Your role focuses on the design, development, and delivery of solutions involving:
- Data integration, processing, and governance.
- Data storage and computation frameworks, with a focus on performance optimizations.
- Analytics and visualizations.
- Infrastructure and cloud computing.
- Data management platforms.
Key Responsibilities Include:
- Implementing scalable architectural models for data processing and storage.
- Developing functionality for data ingestion from multiple heterogeneous sources in batch and real-time mode.
- Building functionality for data analytics, search, and aggregation.
Experience Guidelines:
Mandatory Experience and Competencies:
- Minimum of 5 years of IT experience with at least 3 years in data-related technologies.
- At least 2.5 years of experience in big data technologies and exposure to at least one cloud platform (AWS/Azure/GCP).
- Hands-on experience with the Hadoop stack and other components required for end-to-end data pipeline development.
- Strong experience in programming languages such as Java, Scala, or Python.
- Working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, or GCP BigQuery.
- Familiarity with data platform-related services on at least one cloud platform, including IAM and data security.
Preferred Experience and Knowledge (Good to Have):
- Knowledge of traditional ETL tools and database technologies.
- Familiarity with data governance processes and tools.
- Understanding of distributed messaging frameworks and microservices architectures.
- Experience in performance tuning and optimization of data pipelines.
- Proficiency in CI/CD practices; cloud data specialty certifications are a plus.
Personal Attributes:
- Excellent written and verbal communication skills, with the ability to articulate ideas clearly.
- Collaborative team player.
- Self-motivated with minimal oversight required.
- Effective prioritization and multitasking skills.
- Process-oriented mindset with the ability to define and implement processes.
More Information:
- Experience: 5-10 years