Job Description
Requirements:
- Proficient in working with large data sets, with hands-on technical skills to develop robust data architectures.
- Extensive experience in data modeling and database design.
- Minimum of 6 years of hands-on experience with the Spark/big data technology stack.
- Familiarity with stream processing engines such as Spark Structured Streaming or Flink.
- Competent in analytical processing of big data using Spark.
- At least 6 years of expertise in Scala programming.
- Hands-on experience in administering, configuring, monitoring, and tuning Spark workloads, distributed platforms, and JVM-based systems.
- Minimum of 2 years of cloud deployment experience on AWS, Azure, or Google Cloud Platform.
- Demonstrated involvement in at least 2 product deployments of big data technologies, such as business data lakes and NoSQL databases.
- Ability to evaluate and choose among various big data, NoSQL, and analytics tools and technologies.
- Proficiency in architecting and implementing domain-centric big data solutions.
- Capability to make architectural decisions and provide technology leadership and guidance.
- Excellent problem-solving abilities, hands-on engineering skills, and effective communication.
More Information
- Experience: 5-10 years