mia

A library for running membership inference attacks (MIA) against machine learning models.

These attacks target the privacy of the training data: in MIA, an attacker tries to guess whether a given example was used to train a target model, using only query access to that model. See the paper by Shokri et al., "Membership Inference Attacks Against Machine Learning Models" (IEEE S&P 2017), for details.

This library:

  • Implements the original shadow model attack
  • Is customizable: any scikit-learn Estimator-like object can serve as a shadow or attack model
  • Is tested with Keras and PyTorch
