JobsAisle

Senior Data Engineer (PySpark)

ValueLabs

Dubai, UAE · AED 10,000-16,667/mo · Today
UAE · IT & Technology · Full Time

Skills Required

SQL

Job Description

Qualifications

- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 8+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Responsibilities

- Data Pipeline Development: Design, develop, and maintain highly scalable, optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion from a variety of sources (e.g., relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into formats that support analytical needs and business requirements.
- Performance Optimization: Tune PySpark code and Cloudera components to optimize resource utilization and reduce the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Airflow within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and stakeholders to understand data requirements and support data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Technical Skills

- PySpark: Advanced proficiency, including work with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, with experience in SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills on Linux.