6th Deep Learning Security
and Privacy Workshop
co-located with the 44th IEEE Symposium on Security and Privacy
May 25, 2023

Keynotes

Trustworthy AI... for Systems Security
Lorenzo Cavallaro, University College London

Abstract:
No day goes by without reading machine learning (ML) success stories across various application areas. Systems security is no exception, where ML's tantalizing performance leaves one to wonder whether there are any unsolved problems left. However, machine learning has no real clairvoyant abilities, and once the magic wears off, we're left in uncharted territory. Is machine learning truly capable of ensuring systems security? In this talk, we will highlight the importance of reasoning beyond mere performance by examining the consequences of adversarial attacks and distribution shifts in realistic settings. When relevant, we will also delve into behind-the-scenes aspects to encourage reflection on the reproducibility crisis. Our goal is to foster a deeper understanding of machine learning's role in systems security and its potential for future advancements.

Bio: Lorenzo Cavallaro is a Full Professor of Computer Science at University College London (UCL), where he leads the Systems Security Research Lab. He grew up on pizza, spaghetti, and Phrack, and soon developed a passion for underground and academic research. Lorenzo's research vision is to enhance the effectiveness of machine learning for systems security in adversarial settings. He works with his team to investigate the interplay between program analysis abstractions, representations, and ML models, and their crucial role in creating Trustworthy AI for Systems Security. Despite his love for food, Lorenzo finds his Flow in science, music, and family.

Adversarial Prompting: Return of the Adversarial Example
Eric Wong, University of Pennsylvania

Abstract: Nearly two decades have passed since early adversarial examples were created to attack spam filtering. Researchers have since questioned their practicality and usefulness as a real-world security threat in deep learning. What role can adversarial machine learning take today? In this talk, we will discuss an emerging threat called adversarial prompting: a real-world attack vector for prompt-based generative AI. Adversarial prompts are currently an active security threat with the potential to circumvent safeguards intended to sanitize generated outputs. This talk will explore threat models and black-box adversarial attacks for the prompting setting, demonstrating how they can be used to induce targeted behaviors in generative AI.

Bio: Eric Wong is an Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania. His work focuses on the foundations of robust systems, building on elements of machine learning and optimization to debug, understand, and develop reliable systems. He is a 2020 Siebel Scholar and received an honorable mention for his thesis at Carnegie Mellon University, advised by Zico Kolter, on the robustness of deep networks to adversarial examples. Prior to joining UPenn, he was a postdoc at MIT CSAIL, advised by Aleksander Madry.

Program (Tentative) - May 25, 2023

All times are in the Pacific Time (PT) zone.
08:20–08:30 Opening and Welcome
08:30–09:30 Keynote I
Trustworthy AI... for Systems Security
Lorenzo Cavallaro (University College London)
09:30–10:00 Session I: AI for Security (Session Chair: Suman Jana)
09:30: Is It Overkill? Analyzing Feature-Space Concept Drift in Malware Detectors
Zhi Chen (UIUC), Zhenning Zhang (UIUC), Zeliang Kan (King's College London and University College London), Limin Yang (UIUC), Jacopo Cortellazzi (King's College London and University College London), Feargus Pendlebury (University College London), Fabio Pierazzi (King's College London), Lorenzo Cavallaro (University College London), Gang Wang (UIUC)
09:45: Deep Bribe: Predicting the Rise of Bribery in Blockchain Mining with Deep RL
Roi Bar-Zur (Technion, IC3), Danielle Dori (Technion), Sharon Vardi (Technion), Ittay Eyal (Technion, IC3), Aviv Tamar (Technion)
10:00–10:30 Coffee Break
10:30–11:15 Session II: Adversarial Machine Learning I (Session Chair: Sanghyun Hong)
10:30: On the Brittleness of Robust Features: An Exploratory Analysis of Model Robustness and Illusionary Robust Features
Alireza Aghabagherloo (KU Leuven), Rafa Galvez (KU Leuven), Davy Preuveneers (KU Leuven), Bart Preneel (KU Leuven)
Best Paper Award
10:45: A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
Thibault Simonetto (University of Luxembourg), Salijona Dyrmishi (University of Luxembourg), Salah Ghamizi (University of Luxembourg), Maxime Cordy (University of Luxembourg), Yves Le Traon (University of Luxembourg)
11:00: Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models
Nathalie Baracaldo (IBM Research), Farhan Ahmed (IBM Research), Kevin Eykholt (IBM Research), Yi Zhou (IBM Research), Shriti Priya (IBM Research), Taesung Lee (IBM Research), Swanand Kadhe (IBM Research), Mike Tan (The MITRE Corporation), Sridevi Polavaram (The MITRE Corporation), Sterling Suggs (Two Six Technologies), David Slater (Two Six Technologies)
11:15–12:00 Session III: Adversarial Machine Learning II (Session Chair: Fabio Pierazzi)
11:15: On the Pitfalls of Security Evaluation of Robust Federated Learning
Momin Ahmad Khan (University of Massachusetts Amherst), Virat Shejwalkar (University of Massachusetts Amherst), Amir Houmansadr (University of Massachusetts Amherst), Fatima Muhammad Anwar (University of Massachusetts Amherst)
11:30: SafeFL: MPC-friendly framework for Private and Robust Federated Learning
Till Gehlhar (Technical University of Darmstadt), Felix Marx (Technical University of Darmstadt), Thomas Schneider (Technical University of Darmstadt), Ajith Suresh (Technical University of Darmstadt), Tobias Wehrle (Technical University of Darmstadt), Hossein Yalame (Technical University of Darmstadt)
11:45: Your Email Address Holds the Key: Understanding the Connection Between Email and Password Security with Deep Learning
Etienne Salimbeni (EPFL), Nina Mainusch (EPFL), Dario Pasquini (EPFL)
12:00–13:30 Lunch Break
13:30–14:30 Keynote II
Adversarial Prompting: Return of the Adversarial Example
Eric Wong (University of Pennsylvania)
14:30–15:00 Refreshment Break
15:00–16:00 Panel (Moderator: Yizheng Chen)
Promises and Challenges of Security in Generative AI
A panel discussion with Dan Hendrycks (Center for AI Safety), David Wagner (UC Berkeley), Eric Wong (University of Pennsylvania)
16:00–16:30 Session IV: Attacks (Session Chair: Yinzhi Cao)
16:00: Membership Inference Attacks against Diffusion Models
Tomoya Matsumoto (Osaka University), Takayuki Miura (Osaka University / NTT), Naoto Yanai (Osaka University)
16:15: On Feasibility of Server-side Backdoor Attacks on Split Learning
Behrad Tajalli (ICIS Radboud University), Oguzhan Ersoy (ICIS Radboud University), Stjepan Picek (ICIS Radboud University)
16:30 Closing Remarks

Call for Papers

Important Dates

  • Paper and talk submission deadline: Feb 9, 2023, 11:59 PM (AoE, UTC-12), extended from Feb 2, 2023
  • Acceptance notification: Mar 17, 2023, updated from Mar 1, 2023
  • Camera-ready due: Mar 31, 2023
  • Workshop: May 25, 2023

Overview

Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized in academia and industry as a promising tool for security. On the other hand, the security of deep learning itself has come under increasing scrutiny, with the robustness, privacy, and interpretability of neural networks recently being called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.

Topics of Interest

DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Deep learning for program embedding and similarity
  • Deep program learning
  • Modern deep NLP
  • Recurrent network architectures
  • Neural networks for graphs
  • Neural Turing machines
  • Semantic knowledge-bases
  • Generative adversarial networks
  • Relational modeling and prediction
  • Deep reinforcement learning
  • Attacks against deep learning
  • Resilient and explainable deep learning
  • Robustness, privacy, and interpretability of deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

We accept two types of submissions:

  • Archival, full-length papers
  • Non-archival presentations of previously published novel work

Submitted papers can be up to six pages long, plus additional pages for references only. To be considered, papers must be received by the submission deadline (see Important Dates).

Papers must be formatted for US letter (not A4) paper size. The text must be set in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
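For reference, a submission skeleton matching these requirements might look as follows. This is only a minimal sketch assuming the standard IEEEtran class shipped with the IEEE templates; authors should start from the latest official template distributed by IEEE.

    \documentclass[conference,letterpaper]{IEEEtran}
    % IEEEtran's conference mode already produces a two-column,
    % 10-point Times layout on US letter paper.

    \title{Your Submission Title}
    \author{Anonymous Author(s)} % submissions must be properly anonymized

    \begin{document}
    \maketitle

    \begin{abstract}
    A short summary of the paper.
    \end{abstract}

    % Main body: up to six pages, plus additional pages for references only.

    \end{document}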

For any questions, contact the workshop organizers at dls2023@ieee-security.org.

Presentation Form

All accepted submissions will be presented at the workshop. The archival papers will be included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Submission Site

https://hotcrp.dls2023.ieee-security.org/

Committee

Program Chairs

Steering Committee

Program Committee

  • Baris Coskun, Amazon
  • Chao Zhang, Tsinghua University
  • Christian Wressnegger, Karlsruhe Institute of Technology (KIT)
  • Daniel Arp, TU Berlin
  • David Evans, University of Virginia
  • Davide Maiorca, University of Cagliari
  • Erwin Quiring, International Computer Science Institute (ICSI) Berkeley
  • Fabio Pierazzi, King's College London
  • Feargus Pendlebury, Meta
  • Giovanni Apruzzese, University of Liechtenstein
  • Heng Yin, University of California, Riverside
  • Kexin Pei, Columbia University
  • Konrad Rieck, TU Braunschweig
  • Matthew Jagielski, Google
  • Min Du, Palo Alto Networks
  • Mohammadreza Ebrahimi, University of South Florida
  • Mu Zhang, University of Utah
  • Nicholas Carlini, Google
  • Philip Tully, Google
  • Pin-Yu Chen, IBM Research
  • Sagar Samtani, Indiana University
  • Samuel Marchal, WithSecure
  • Sanghyun Hong, Oregon State University
  • Scott Coull, Mandiant
  • Shirin Nilizadeh, University of Texas at Arlington
  • Teodora Baluta, National University of Singapore
  • Tomas Pevny, Czech Technical University in Prague
  • Tummalapalli S Reddy, University of Texas at Arlington
  • Varun Chandrasekaran, Microsoft Research
  • Yacin Nadji, Corelight Inc
  • Yang Zhang, CISPA Helmholtz Center for Information Security
  • Yevgeniy Vorobeychik, Washington University in St. Louis
  • Yinzhi Cao, Johns Hopkins University
  • Ziqi Yang, Zhejiang University