Data Security and Privacy

The Data Security and Privacy group specializes in developing algorithms and tools for data protection, privacy in machine learning, data security in the cloud and more.

AI Privacy

Many privacy regulations, including GDPR, mandate that organizations abide by certain privacy principles when processing personal information. This also applies to AI models trained on personal data. For example, data minimization requires that only the data necessary to fulfill a given purpose be collected. The right to be forgotten enables data subjects to request that an organization delete their personal data. Organizations may also be required to perform a privacy risk assessment for new services being released. However, anonymized data, which can no longer be traced back to a specific individual, is typically exempt from all of these obligations.

Why is this relevant to machine learning?

Recent studies show that a malicious third party with access to a trained ML model, even without access to the training data itself, can still reveal sensitive personal information about the people whose data was used to train the model. For example, it may be possible to reveal whether or not a person's data is part of the model's training set (membership inference), or even to infer sensitive attributes about them, such as their salary (attribute inference). The sketch below illustrates the intuition behind membership inference.
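
To make this concrete, here is a minimal, self-contained sketch of a confidence-threshold membership inference attack in a scikit-learn setting. Everything in it (the dataset, the deliberately overfit random forest, and the 0.9 threshold) is an illustrative assumption; practical attacks, such as the shadow-model attacks studied in the literature, are more elaborate, but the core signal is the same.

```python
# A minimal confidence-threshold membership inference attack (illustrative).
# Intuition: models tend to be more confident on examples they were trained on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Deliberately overfit so the membership signal is easy to see.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(x_train, y_train)

def true_label_confidence(model, x, y):
    """The model's predicted probability for each sample's true label."""
    return model.predict_proba(x)[np.arange(len(y)), y]

# Attack: guess "member" when confidence on the true label exceeds a threshold.
THRESHOLD = 0.9  # arbitrary illustrative choice
members = true_label_confidence(model, x_train, y_train) > THRESHOLD
non_members = true_label_confidence(model, x_test, y_test) > THRESHOLD

attack_accuracy = (members.mean() + (1 - non_members.mean())) / 2
print(f"attack accuracy: {attack_accuracy:.2f} (0.5 = random guessing)")
```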

It’s often difficult, however, to comply with such privacy regulations when dealing with AI. Advanced machine learning algorithms, such as deep neural networks, tend to consume large amounts of data to generate predictions or classifications. These algorithms often produce “black box” models, where it is difficult to determine exactly which data influenced a decision and how.

We are currently researching new techniques to enable AI-based solutions to adhere to such privacy requirements. These techniques, each accompanied by a minimal illustrative code sketch after the list, include:

  1. Data minimization for machine learning models – This technique helps to reduce the amount and granularity of features used by machine learning algorithms to perform classification or prediction, by either removal (suppression) or generalization. The process is tailored to the machine learning model at hand, thus reducing the negative effect on model accuracy. Once the minimized feature set is determined, any newly collected data for analysis can be minimized before applying the model to it. This method is focused on preserving the privacy of individuals for whom predictions will be made by the model, i.e., on runtime data. For more information, check out our video (see below) or look for our open-source tool.
  2. Machine learning model anonymization – This method creates a model-based, tailored anonymization scheme to anonymize training data before using it to train an ML model. Using knowledge encoded within a model allows us to derive an anonymization that minimizes the effect on the model’s accuracy. Once the new model is trained on the anonymized dataset, it can be used, shared, and published freely. The focus is on enabling enterprises to expose/share the analytics model, while protecting the individuals whose data was used to train the model, thus adhering to privacy regulations. For more information, check out our video (see below) and read our blog post.
  3. Privacy risk assessment for machine learning models – This enables comparing and choosing between different ML models based not only on accuracy but also on privacy risk. We are studying ways to assess and quantify the privacy risk of these models, as well as reduce their privacy risks by directing their development processes to produce models that rely less on sensitive data. There are several risk factors that can be taken into account, such as privacy risks for training data (e.g., membership inference and attribute inference attacks) and privacy risks of the general population (regardless of participation in the training set). The risk level also depends on the level of sensitivity of the features used to train the model. For more information, read our blog post or check out some of our privacy attacks in the Adversarial Robustness Toolbox inference module.
  4. Forgetting from ML models – This approach enables selectively “forgetting” specific samples from an existing, trained neural network model in a way that is much more efficient than retraining from scratch, and with minimal effect on the model’s accuracy.
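
First, a sketch of the data-minimization idea: greedily suppress features (here, by replacing them with a constant) whenever the trained model's accuracy stays within a tolerance. This is an illustration under assumed choices (scikit-learn, an arbitrary 1% tolerance, suppression only), not the exact algorithm behind our open-source tool; generalization could be handled analogously by binning values instead of replacing them.

```python
# Greedy feature suppression tailored to a trained model (illustrative).
# A feature is suppressed (replaced by a constant) if doing so costs the
# model at most TOLERANCE accuracy on held-out data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
x_train, x_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(x_train, y_train)
baseline = accuracy_score(y_val, model.predict(x_val))
TOLERANCE = 0.01  # accept up to one point of accuracy loss (arbitrary)

suppressed = []           # features that no longer need to be collected
x_val_min = x_val.copy()  # stand-in for minimized runtime data
for f in range(X.shape[1]):
    candidate = x_val_min.copy()
    candidate[:, f] = x_train[:, f].mean()  # suppress feature f
    if baseline - accuracy_score(y_val, model.predict(candidate)) <= TOLERANCE:
        suppressed.append(f)
        x_val_min = candidate  # keep the suppression

final = accuracy_score(y_val, model.predict(x_val_min))
print(f"suppressed {len(suppressed)}/{X.shape[1]} features, "
      f"accuracy {baseline:.3f} -> {final:.3f}")
```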
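
Next, a sketch of model-guided anonymization using k-member microaggregation: records that the original model treats alike are grouped, and each group of at least k records is replaced by its centroid before retraining. The grouping heuristic, the value of k, and the dataset are assumptions made for illustration; this is not our published scheme.

```python
# Model-guided k-microaggregation (illustrative). Records the original model
# predicts alike are grouped, and each group of >= K records is replaced by
# its centroid; the model is then retrained on the anonymized data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

K = 10  # anonymity parameter (arbitrary): each published record is shared by >= K people

X, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

original = RandomForestClassifier(random_state=0).fit(x_train, y_train)
preds = original.predict(x_train)  # the model's view of the training data

x_anon = x_train.copy()
for label in np.unique(preds):
    idx = np.where(preds == label)[0]
    idx = idx[np.argsort(x_train[idx, 0])]  # crude proximity ordering
    n_groups = max(1, len(idx) // K)  # keeps groups at size >= K
    # (groups with fewer than K records would need merging in a real scheme)
    for group in np.array_split(idx, n_groups):
        x_anon[group] = x_train[group].mean(axis=0)  # centroid replaces records

anonymized = RandomForestClassifier(random_state=0).fit(x_anon, y_train)
print("original:  ", accuracy_score(y_test, original.predict(x_test)))
print("anonymized:", accuracy_score(y_test, anonymized.predict(x_test)))
```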
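
Third, a sketch of privacy risk assessment: scoring candidate models by their vulnerability to the membership inference attack sketched earlier, alongside accuracy. The risk metric used here (attack advantage over random guessing at an arbitrary threshold) is a simplified stand-in for the attack implementations available in the Adversarial Robustness Toolbox's inference module.

```python
# Scoring candidate models by accuracy *and* membership inference risk
# (illustrative). membership_risk reuses the threshold-attack idea above.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def membership_risk(model, x_in, y_in, x_out, y_out, threshold=0.9):
    """Attack advantage over random guessing, in [0, 0.5]."""
    def conf(x, y):
        return model.predict_proba(x)[np.arange(len(y)), y]
    attack_acc = (np.mean(conf(x_in, y_in) > threshold)
                  + np.mean(conf(x_out, y_out) <= threshold)) / 2
    return max(0.0, attack_acc - 0.5)

X, y = load_breast_cancer(return_X_y=True)
x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

candidates = {
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=5000),
}
for name, m in candidates.items():
    m.fit(x_tr, y_tr)
    print(f"{name}: accuracy={accuracy_score(y_te, m.predict(x_te)):.3f}, "
          f"membership risk={membership_risk(m, x_tr, y_tr, x_te, y_te):.3f}")
```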
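
Finally, a sketch of efficient forgetting via data sharding, in the spirit of the SISA approach of Bourtoule et al. ("Machine Unlearning"): train an ensemble over disjoint shards so that deleting a sample only requires retraining the one shard that contained it. Our own technique operates on an existing trained neural network, so treat this purely as an illustration of the efficiency argument.

```python
# Shard-based forgetting in the spirit of SISA (illustrative). Each model in
# the ensemble sees only its own shard, so deleting one sample means
# retraining one shard instead of the whole ensemble.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

N_SHARDS = 5  # arbitrary; more shards = cheaper forgetting, weaker individual models

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.RandomState(0)
shards = list(np.array_split(rng.permutation(len(X)), N_SHARDS))
models = [LogisticRegression(max_iter=5000).fit(X[s], y[s]) for s in shards]

def predict(x):
    """Majority vote over the shard models."""
    votes = np.stack([m.predict(x) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

def forget(sample_idx):
    """Delete one training sample, retraining only the shard it lived in."""
    for i, s in enumerate(shards):
        if sample_idx in s:
            shards[i] = s[s != sample_idx]
            models[i] = LogisticRegression(max_iter=5000).fit(
                X[shards[i]], y[shards[i]])
            return

forget(42)  # sample 42 no longer influences any model in the ensemble
print("accuracy after forgetting:", (predict(X) == y).mean())
```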

Here is a high-level schematic depiction of the data minimization process:

Here is a high-level schematic depiction of the anonymization process:


Abigail Goldsteen, Data Security and Privacy, IBM Research - Haifa
