JobsAisle

Artificial Intelligence Consultant

malomatia

Doha, Qatar · QAR 8,400-23,100/mo · Today
Qatar · IT & Technology · Full Time

Skills Required

Python · AWS · Azure · Excel · Machine Learning · Communication · Leadership · Supply Chain · Safety

Job Description

You will lead the security architecture, governance, and assurance of our AI and GenAI platforms across the organization, spanning classical ML systems and modern LLM/agentic architectures. You will define how models are designed, deployed, monitored, and defended; ensure they are robust, explainable, privacy-preserving, and compliant; and act as the go-to expert for AI security across product, data, cloud, and security teams.

Key Responsibilities

AI Security Architecture

- Design and review secure architectures for ML/LLM workloads (training, fine-tuning, inference, RAG, agents, plugins, tool calling, APIs).
- Define reference architectures for on-prem, hybrid, and cloud AI platforms (Azure OpenAI, AWS Bedrock, GCP Vertex AI, self-hosted models, etc.).
- Perform AI-specific threat modeling (e.g., data poisoning, model theft, prompt injection, jailbreaks, supply chain compromise, inference attacks) using MAESTRO or a similar framework.
- Align controls with leading frameworks: NIST AI RMF, ISO/IEC 27001, ISO/IEC 27090, ISO/IEC 42001, OWASP GenAI/LLM Top 10, CSA guidance, and MITRE ATLAS.

Security Control Design & Implementation

Define and oversee the implementation of controls for:
- Model and artifact integrity (signing, SBOMs, secure registries).
- Access control, isolation, rate limiting, and abuse detection.
- Secure prompt engineering and guardrail policies.
- Prompt monitoring to detect attacks in progress.
- Data security: data classification, data masking, and DLP.

Reliability & Trustworthiness

- Partner with engineering and data science to embed robustness, observability, fallback strategies, and evaluation pipelines (safety, bias, toxicity, and hallucination monitoring).
- Contribute to SLOs/SLAs for AI systems, including security and reliability KPIs.
- Embed AI security into CI/CD: scanning, dependency checks, policy as code, and red teaming of AI components pre- and post-release.
- Perform red-teaming activities that probe for abuse and induced hallucinations.
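As a flavor of the guardrail and prompt-monitoring work described above, a minimal deny-list check might look like the sketch below. The pattern list and function name are illustrative assumptions; real deployments layer trained classifiers and provider-side safety tooling on top of simple pattern matching.

```python
import re

# Illustrative only: a naive deny-list of prompt-injection phrases.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"you are now (in )?developer mode",
]

def flag_prompt(prompt: str) -> list[str]:
    """Return the deny-list patterns a user prompt matches, for logging/alerting."""
    lowered = prompt.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

A monitoring pipeline would feed matches like these into abuse-detection dashboards rather than blocking outright, to keep false-positive rates visible.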
Incident Response & Governance

- Develop and maintain AI-specific playbooks (prompt abuse, model exfiltration, data leakage, compromised agents).
- Lead or support AI red/blue/purple-teaming exercises using frameworks such as MITRE ATLAS.
- Advise on alignment with emerging AI regulations and standards (e.g., the EU AI Act, regional laws, internal AI use policies).
- Define internal policies on responsible AI, data usage, model lifecycle, and third-party AI risk management.

Stakeholder Leadership

- Run workshops, training, and awareness sessions for engineering, security, and business teams.

Required Qualifications & Experience

- 8-12+ years in cybersecurity, with 3-5+ years focused on AI/ML or data platforms (the two may overlap).
- Hands-on experience with:
  - Cloud platforms (Azure, AWS, GCP) and their AI services.
  - At least one agentic/GenAI stack (e.g., Transformers, LangChain/LlamaIndex, vector DBs, model gateways, MLOps platforms).
- Proven track record designing or reviewing secure architectures for ML pipelines, LLM/RAG systems, or agentic/automation platforms.
- Strong understanding of:
  - Cryptography, identity & access management, and network & application security.
  - Data protection & privacy (PII, PHI, DPIA concepts).
- Experience working with or mapping to frameworks/standards such as NIST AI RMF, ISO/IEC 27001, ISO/IEC 42001, SOC 2, OWASP Top 10, OWASP GenAI/LLM Top 10, MITRE ATT&CK/ATLAS, and CSA guidance.
- Excellent communication skills: able to translate complex AI risks into clear business and technical requirements.

Core Technical & Domain Skills

- Model types (LLMs, encoders, diffusion models, classical ML) and training/inference flows.
- AI penetration testing: prompt injection, model tampering, data poisoning, output manipulation, exfiltration, shadow AI, insecure plugins/integrations.
- Logging, tracing, and safety & quality metrics for AI systems.
- Health and latency monitoring.
- Strong scripting/automation skills (Python preferred) for security tooling and assessments.
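To illustrate the kind of scripting and artifact-integrity work listed above, here is a small hash-verification helper. The manifest format (a JSON map of filename to SHA-256 digest) is an assumption for illustration; production pipelines would use signed manifests and tooling such as Sigstore rather than bare hashes.

```python
import hashlib
import json
from pathlib import Path

def verify_artifacts(manifest_path: str, artifact_dir: str) -> dict[str, bool]:
    """Check each artifact in a hypothetical {name: sha256-hex} manifest
    against the file actually present on disk."""
    manifest = json.loads(Path(manifest_path).read_text())
    results = {}
    for name, expected in manifest.items():
        data = (Path(artifact_dir) / name).read_bytes()
        results[name] = hashlib.sha256(data).hexdigest() == expected
    return results
```

A CI/CD gate could run a check like this before promoting a model to a serving registry, failing the pipeline on any mismatch.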
Certifications & Training (Required / Highly Desirable)

- One or more core security certifications.
- One or more cloud security certifications: CCSK (Cloud Security Alliance), AWS Security Specialty, Azure Security Engineer, Google Professional Cloud Security Engineer.
- AI and AI-security-specific training/certifications (or a commitment to obtain them within 6-12 months):
  - NIST AI RMF or ISO/IEC 42001 focused training.
  - Certified AI Security Professional (CAISP or similar offerings covering LLM/GenAI threats, MITRE ATLAS, and the OWASP LLM Top 10).
  - OffSec or similar LLM & AI red-teaming or GenAI security courses.
  - Vendor AI/ML certifications (Azure AI Engineer, AWS Machine Learning Specialty, GCP ML Engineer) with a demonstrated security emphasis.
  - GIAC/GWAPT/GXPN/GCLD or similar offensive/cloud/AppSec certifications.
  - Secure MLOps & ML supply-chain security.
  - OWASP GenAI/LLM Top 10, MITRE ATLAS, CSA MAESTRO, and other AI risk frameworks.