BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Modeling\, Quantifying\, and Limiting Adversary Knowledge - Piotr 
 Mardziel\, University of Maryland\, College Park
DTSTART:20150213T100000Z
DTEND:20150213T110000Z
UID:TALK57920@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:Users participating in online services are required to relinqu
 ish control over potentially sensitive personal information\, exposing the
 m to intentional or unintentional misuse of said information by the serv
 ice providers. Users wishing to avoid this must either abstain from servi
 ces that are often extremely useful\, or provide false information\, whi
 ch is usually contrary to the terms of service they must abide by.\nAn at
 tractive middle
 -ground alternative is to maintain control in the hands of the users and p
 rovide a mechanism with which information that is necessary for useful ser
 vices can be queried. Users need not trust any external party in the manag
 ement of their information but are now faced with the problem of judging w
 hen queries by service providers should be answered or when they should be
  refused due to revealing too much sensitive information.\n\nJudging query
  safety is difficult. Two queries may be benign in isolation but might rev
 eal more than a user is comfortable with in combination. Additionally\, mali
 cious adversaries who wish to learn more than allowed might query in a man
 ner that attempts to hide the flows of sensitive information. Finally\, us
 ers cannot rely on human inspection of queries due to their volume and the g
 eneral lack of expertise.\n\nThis work tackles the automation of query jud
 gment\, giving the self-reliant user a means with which to discern benign 
 queries from dangerous or exploitative ones. The approach is based on explic
 it modeling and tracking of the knowledge of adversaries as they learn abo
 ut a user through the queries they are allowed to observe. The approach qu
 antifies the absolute risk a user is exposed to\, taking into account all
  the information already revealed when deciding whether to answer a q
 uery. Proposed techniques for approximate but sound probabilistic inferenc
 e are used to make the approach tractable\, letting the user trade off ut
 ility (in terms of the queries judged safe) and efficiency (in t
 erms of the expense of knowledge tracking)\, while maintaining the guarant
 ee that risk to the user is never underestimated. We apply the approach to
  settings where user data changes over time and settings where multiple us
 ers wish to pool their data to perform useful collaborative computations w
 ithout revealing too much information.\n\nBy addressing one of the major o
 bstacles preventing the viability of personal information control\, this w
 ork brings the attractive proposition closer to reality.\n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
