Secure packaging for AI models
10-27, 15:15–15:40 (Europe/Berlin), Main stage

AI models (especially LLMs) are now being released at an unprecedented pace. At the same time, supply chain attacks are increasing by more than 700% year over year. Putting these two facts together reveals a troubling picture: it is entirely possible for bad actors to compromise unsuspecting hosts that want to benefit from the AI explosion. Fortunately, by drawing analogies between training AI models and building traditional software artifacts, we can build solutions to package ML models so that the majority of supply chain security risks are alleviated.


Looking at the traditional software development life cycle and its associated supply chain risks, we see that there are already solutions (e.g., artifact signing, provenance generation, software bills of materials) that enforce transparency and thereby reduce supply chain compromises. We can do the same for ML models, across different ML frameworks and model hubs.

Specifically, we will present a solution for signing models with Sigstore. This has two benefits. First, users can preferentially download models from model hubs (such as Kaggle or Hugging Face) that display the presence of a signature, much like security-conscious users download a binary artifact's signature and check it before using the artifact. The other benefit comes from transparency: because Sigstore signatures are recorded in a public, auditable transparency log, bad behavior can be discovered and all models from bad actors can be identified.
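
To give a rough feel for the approach (this is an illustrative sketch, not the tooling presented in the talk), the snippet below hashes every file of a model into a manifest and then signs that manifest with Sigstore keyless signing. It assumes the sigstore-python CLI (`sigstore sign`) is installed and an OIDC identity is available; the directory name `my-model` is a placeholder.

```python
"""Sketch: hash a model's files into a manifest, then sign the manifest with Sigstore."""
import hashlib
import json
import pathlib
import subprocess


def build_manifest(model_dir: str) -> dict:
    """Record a SHA-256 digest for every file in the model directory."""
    digests = {}
    root = pathlib.Path(model_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return {"model": model_dir, "files": digests}


if __name__ == "__main__":
    manifest_path = pathlib.Path("model_manifest.json")
    manifest_path.write_text(json.dumps(build_manifest("my-model"), indent=2))
    # Keyless signing of the manifest (assumes the sigstore-python CLI is on PATH).
    # This produces a Sigstore bundle a model hub could display next to the model,
    # and the signature becomes auditable via the transparency log.
    subprocess.run(["sigstore", "sign", str(manifest_path)], check=True)
```

A consumer would then rebuild the manifest from the downloaded files, compare it to the signed one, and verify the signature (e.g., with `sigstore verify identity`) against the publisher's expected identity before loading the model.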

A further component of reducing supply chain compromises is adding support for SLSA for ML. This differs from SLSA for binary artifacts in that it must take into account datasets, data manipulations (preprocessing), the use of pretrained models (fine-tuning, transfer learning), and the ML frameworks themselves. In return, it brings substantial benefits, which can be summarized as an increased ability to react to compromise and a better way to provide regulators with evidence that all applicable regulations have been followed.
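
To make that difference concrete, here is a hedged sketch of what an ML-aware provenance statement might record, using the public in-toto/SLSA v1.0 provenance layout. All concrete values (names, digests, builder ID, build type URI) are invented placeholders, not fields from the talk's actual implementation.

```python
import json

# Hypothetical SLSA v1.0 provenance for a fine-tuned model. The structure follows
# the public in-toto Statement / SLSA provenance format; every concrete value is
# a placeholder for illustration only.
provenance = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {"name": "my-model/weights.bin", "digest": {"sha256": "<model-digest>"}},
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "buildType": "https://example.com/ml-training/v1",  # placeholder
            "externalParameters": {"trainingConfig": "configs/finetune.yaml"},
            "resolvedDependencies": [
                # The ML-specific inputs that SLSA for ML must additionally capture:
                {"name": "dataset/train.tfrecord", "digest": {"sha256": "<dataset-digest>"}},
                {"name": "preprocessing/clean.py", "digest": {"sha256": "<script-digest>"}},
                {"name": "pretrained/base-model.bin", "digest": {"sha256": "<base-model-digest>"}},
                {"name": "pkg:pypi/tensorflow@2.15.0", "digest": {"sha256": "<framework-digest>"}},
            ],
        },
        "runDetails": {
            "builder": {"id": "https://example.com/trusted-training-pipeline"},
        },
    },
}

print(json.dumps(provenance, indent=2))
```

With such a statement attached to a model, a verifier can check that every input (dataset, preprocessing code, base model, framework) is one it trusts, and can quickly identify which models to retrain or revoke when any of those inputs is later found to be compromised.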

Laurent is a security engineer on the Google Open Source Security Team (GOSST). His team works in collaboration with the open-source community and the OpenSSF on novel security solutions such as Scorecards, Allstar, Sigstore, SLSA, OSS-Fuzz, and OSV.

Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), working on supply chain security, mainly on GUAC. Before GOSST, Mihai created the TensorFlow Security team after joining Google from a startup, where he worked on incorporating Differential Privacy (DP) into Machine Learning (ML) algorithms. Mihai holds a PhD in Differential Privacy from UMass Boston.