Securing AI Models and ML Pipelines: Best Practices and Pitfalls to Avoid

Salon H
Mohammad Saheer Padinharayil | Manager | AI & Cloud Security Enthusiast
Tue 01:20PM - 02:00PM, September 9th

As machine learning becomes integral to modern applications, the attack surface for AI systems is rapidly expanding. From data poisoning and model inversion to supply chain vulnerabilities in ML pipelines, the risks are real—and often overlooked.

In this session, we'll explore key challenges in securing the AI lifecycle. You'll learn how adversaries exploit weaknesses in the model training, deployment, and inference stages—and how to counter them with practical strategies. We'll walk through:

- Threat modeling for ML pipelines
- Secure model training and deployment using ML tools
- Proven strategies for securing data inputs, model artifacts, and dependencies
- CI/CD best practices for AI/ML, including policy enforcement and SBOMs
- Real-world case studies of AI system compromises—and what we can learn from them

This talk will give you actionable insight to evaluate and secure your own AI initiatives, ensuring trust and compliance in an era where AI is not just an asset but a potential liability.
