  • Open access
  • Print publication year: 2020
  • Online publication date: November 2020

11 - Probabilistic Abstract Interpretation: Sound Inference and Application to Privacy

Summary

Bayesian probability provides a model of uncertain knowledge and of learning from observations. Because it characterises optimal adversarial behaviour, Bayesian reasoning forms the basis of safety properties in contexts such as privacy and fairness. Probabilistic programming is a convenient way to implement Bayesian reasoning, but the adversarial setting imposes obstacles to its use: approximate inference can underestimate what an adversary knows, while exact inference is impractical for large state spaces. By abstracting distributions, the semantics of a probabilistic language, and the inference process, a combination we term probabilistic abstract interpretation, we construct adversary models that are both approximate and sound. We apply these techniques to build a privacy-protecting monitor and describe how to trade precision against computational cost in its implementation while remaining sound with respect to privacy risk bounds.
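
To make the soundness requirement concrete, below is a minimal Python sketch, not the chapter's implementation: it abstracts a distribution over a small finite secret space by an interval of probabilities per secret, conditions it soundly on an observation, and uses the resulting upper bounds to enforce a knowledge threshold. All names here (AbsDist, monitor, the example threshold) are illustrative assumptions; the chapter's abstractions of distributions, semantics, and inference are considerably more general than per-point intervals.

    from dataclasses import dataclass

    @dataclass
    class AbsDist:
        """Abstract distribution: each secret maps to an interval
        (lo, hi) guaranteed to contain its true probability."""
        bounds: dict  # secret -> (lo, hi)

        def condition(self, consistent):
            """Soundly condition on an observation, given as the set of
            secrets consistent with it. The returned intervals enclose the
            exact posterior, so adversary knowledge is never underestimated."""
            keep = {s: b for s, b in self.bounds.items() if s in consistent}
            lo_sum = sum(lo for lo, _ in keep.values())
            hi_sum = sum(hi for _, hi in keep.values())
            post = {}
            for s, (lo, hi) in keep.items():
                # Worst case for the upper bound: this secret at its maximum
                # probability, all other consistent secrets at their minimum.
                denom_hi = hi + (lo_sum - lo)
                denom_lo = lo + (hi_sum - hi)
                post[s] = (lo / denom_lo if denom_lo > 0 else 0.0,
                           hi / denom_hi if denom_hi > 0 else 1.0)
            return AbsDist(post)

        def max_belief(self):
            """Upper bound on the adversary's confidence in any one secret."""
            return max((hi for _, hi in self.bounds.values()), default=0.0)

    def monitor(prior, consistent, threshold):
        """Privacy monitor: permit the query only if the adversary's
        posterior belief in every secret is provably below the threshold."""
        posterior = prior.condition(consistent)
        return posterior.max_belief() <= threshold, posterior

    # Example: a secret in {0,...,3}, roughly uniform prior with some slack.
    prior = AbsDist({s: (0.2, 0.3) for s in range(4)})
    # The adversary observes that the secret is even.
    ok, post = monitor(prior, consistent={0, 2}, threshold=0.9)
    print(ok, post.bounds)

Because conditioning only ever widens the enclosing intervals soundly, the monitor can safely refuse a query whenever the upper bound crosses the threshold. This also illustrates the precision/cost trade-off the summary mentions: a coarser abstraction is cheaper to maintain but yields looser upper bounds, so it rejects more queries than a precise one would.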
