JobsAisle

Azure DevOps Engineer (m/f/d)

SupportFinity™

UAE · AED 10,000-16,667/mo · Today
UAE · IT & Technology · Full Time

Skills Required

Python, Azure, Git, DevOps

Job Description

<div><h3>Overview</h3><p>Halian | Managed Services, Recruitment Agency & Contract Staffing | Posted Mar 12</p><p>Contract | Negotiable | Unknown</p><p>We are seeking a DevOps Engineer to play a pivotal role in maturing our data platform and developer experience. You will be responsible for evolving our DevOps platform, processes, and systems with the ultimate aim of enhancing developer velocity and practices, improving code and data quality, and maintaining a well-managed data platform.</p><h3>Responsibilities</h3><ul><li>Databricks Asset Bundles (DABs): Refine our DABs processes to improve developer experience and velocity.</li><li>CI/CD Pipeline Development: Build end-to-end automation in Azure DevOps to test and promote code across our environments (covering ADF, ADB, and Power BI).</li><li>Infrastructure as Code (IaC): Automate the provisioning of Databricks workspaces, clusters, and projects using Terraform or relevant templates.</li><li>Governance: Mature and manage security policies and access controls.</li><li>ADF Management: Manage our ADF resources, optimizing for reliability and cost.</li><li>Monitoring & Cost Optimization: Enhance observability across storage and compute to ensure cost-effective operations.</li></ul><h3>Qualifications</h3><ul><li>Bachelor's degree in a relevant field (e.g., Data Engineering, Software Engineering, Computer Science).</li><li>Extensive hands-on experience with Azure Cloud services; strong understanding of resource provisioning, networking, and security in Azure.</li><li>Deep experience with Azure Databricks administration (clusters, scaling, job scheduling, monitoring).</li><li>Strong experience with IaC (Terraform or Bicep), managing changes via code, and infrastructure versioning.</li><li>Competence with version control (Git) and implementing CI/CD pipelines for data workloads and infrastructure.</li><li>Familiarity with Spark internals (how compute works, resource allocation, partitioning, memory) to aid in optimizing infrastructure.</li><li>Knowledge of cloud cost management (spot/preemptible VMs, auto-scaling, rightsizing).</li><li>Strong scripting/automation skills (Python / PowerShell / Bash).</li><li>Experience working with data engineering, analytics, and BI teams; understanding of pipeline flows, data dependencies, and common data bottlenecks.</li><li>Databricks or Azure certifications are a plus.</li></ul><h3>Location</h3><p>Abu Dhabi, United Arab Emirates</p></div>