Discover shadow and cloud-native assets and accurately classify data
Secure sensitive data everywhere from hybrid multicloud to SaaS
Establish controls for safe adoption of AI technologies including GenAI
Assess and improve compliance with security best-practice frameworks
Ali Salman
Ali Salman is a Principal Solutions Architect with a master’s degree in cybersecurity and more than 18 years of experience designing secure, scalable architectures for enterprise customers. His work spans cloud and data center transformation, SaaS backup and recovery for Microsoft 365 and Azure/AWS workloads, disaster recovery planning and orchestration, and Kubernetes platform resilience.
An expert in virtualization and infrastructure modernization, from VMware and Hyper‑V to Proxmox, Ali also advises organizations on cybersecurity architecture, including Zero Trust frameworks, identity management, segmentation, and system hardening. He focuses on helping enterprises strengthen threat management and incident readiness, implement secure AI architectures, and align technology strategy with governance, compliance, and business resilience goals.
Before joining Veeam, Ali held senior technical and leadership roles across major organizations in the Middle East and South Asia, including OSN (Dubai), ITCS, Inbox Business Technologies, and Mobilink GSM (VEON). In these positions, he led enterprise infrastructure consulting, platform architecture, and security initiatives that supported large‑scale digital transformation and operational reliability.
Ali has spoken at leading industry forums, including the CXO Forum, FPCCI Committee on Cyber Security, and AAI, where he shares insights on secure design patterns, operational readiness, and real‑world implementation across hybrid and multi‑cloud environments.
LLM Firewalls: Securing AI Systems in the Age of Generative Intelligence is a practical, enterprise-focused deep dive into defending modern AI systems against risks that traditional controls were never designed to handle: prompt injection, indirect attacks through RAG knowledge sources, unsafe and untrusted outputs, sensitive data leakage, and excessive agent autonomy.