08:20–08:30 | Opening and Welcome
08:30–09:30 | Keynote I
Trustworthy AI... for Systems Security
Lorenzo Cavallaro (University College London)
09:30–10:00 | Session I: AI for Security (Session Chair: Suman Jana)
09:30: Is It Overkill? Analyzing Feature-Space Concept Drift in Malware Detectors
Zhi Chen (UIUC), Zhenning Zhang (UIUC), Zeliang Kan (King's College London and University College London), Limin Yang (UIUC), Jacopo Cortellazzi (King's College London and University College London), Feargus Pendlebury (University College London), Fabio Pierazzi (King's College London), Lorenzo Cavallaro (University College London), Gang Wang (UIUC)
09:45: Deep Bribe: Predicting the Rise of Bribery in Blockchain Mining with Deep RL
Roi Bar-Zur (Technion, IC3), Danielle Dori (Technion), Sharon Vardi (Technion), Ittay Eyal (Technion, IC3), Aviv Tamar (Technion)
10:00–10:30 | Coffee Break
10:30–11:15 | Session II: Adversarial Machine Learning I (Session Chair: Sanghyun Hong)
10:30: On the Brittleness of Robust Features: An Exploratory Analysis of Model Robustness and Illusionary Robust Features
Alireza Aghabagherloo (KU Leuven), Rafa Galvez (KU Leuven), Davy Preuveneers (KU Leuven), Bart Preneel (KU Leuven). Best Paper Award
10:45: A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
Thibault Simonetto (University of Luxembourg), Salijona Dyrmishi (University of Luxembourg), Salah Ghamizi (University of Luxembourg), Maxime Cordy (University of Luxembourg), Yves Le Traon (University of Luxembourg)
11:00: Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models
Nathalie Baracaldo (IBM Research), Farhan Ahmed (IBM Research), Kevin Eykholt (IBM Research), Yi Zhou (IBM Research), Shriti Priya (IBM Research), Taesung Lee (IBM Research), Swanand Kadhe (IBM Research), Mike Tan (The MITRE Corporation), Sridevi Polavaram (The MITRE Corporation), Sterling Suggs (Two Six Technologies), David Slater (Two Six Technologies)
11:15–12:00 | Session III: Adversarial Machine Learning II (Session Chair: Fabio Pierazzi)
11:15: On the Pitfalls of Security Evaluation of Robust Federated Learning
Momin Ahmad Khan (University of Massachusetts Amherst), Virat Shejwalkar (University of Massachusetts Amherst), Amir Houmansadr (University of Massachusetts Amherst), Fatima Muhammad Anwar (University of Massachusetts Amherst)
11:30: SafeFL: MPC-friendly framework for Private and Robust Federated Learning
Till Gehlhar (Technical University of Darmstadt), Felix Marx (Technical University of Darmstadt), Thomas Schneider (Technical University of Darmstadt), Ajith Suresh (Technical University of Darmstadt), Tobias Wehrle (Technical University of Darmstadt), Hossein Yalame (Technical University of Darmstadt)
11:45: Your Email Address Holds the Key: Understanding the Connection Between Email and Password Security with Deep Learning
Etienne Salimbeni (EPFL), Nina Mainusch (EPFL), Dario Pasquini (EPFL)
12:00–13:30 | Lunch Break
13:30–14:30 | Keynote II
Adversarial Prompting: Return of the Adversarial Example
Eric Wong (University of Pennsylvania)
14:30–15:00 | Refreshment Break
15:00–16:00 | Panel (Moderator: Yizheng Chen)
Promises and Challenges of Security in Generative AI
A panel discussion with Dan Hendrycks (Center for AI Safety), David Wagner (UC Berkeley), Eric Wong (University of Pennsylvania)
16:00–16:30 | Session IV: Attacks (Session Chair: Yinzhi Cao)
16:00: Membership Inference Attacks against Diffusion Models
Tomoya Matsumoto (Osaka University), Takayuki Miura (Osaka University / NTT), Naoto Yanai (Osaka University)
16:15: On Feasibility of Server-side Backdoor Attacks on Split Learning
Behrad Tajalli (ICIS Radboud University), Oguzhan Ersoy (ICIS Radboud University), Stjepan Picek (ICIS Radboud University)
16:30 | Closing Remarks
Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a promising tool for security in both academia and industry. On the other hand, the security of deep learning itself has become a focus of research: the robustness, privacy, and interpretability of neural networks have recently been called into question.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.
DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
Deep Learning
Computer Security
We accept two types of submissions:
Submitted papers may be up to six pages long, plus additional pages for references. To be considered, papers must be received by the submission deadline (see Important Dates).
Papers must be formatted for US letter (not A4) size paper. The text must be set in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide, in Times font at 10 points or larger with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
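As a rough illustration of these requirements, a minimal submission skeleton built on the IEEEtran conference class (assumed here; consult the latest IEEE templates for the authoritative files and options) might look like the following sketch:

```latex
% Minimal sketch of an anonymized submission skeleton.
% Assumes the standard IEEEtran conference class; verify against the
% latest IEEE conference proceedings templates before submitting.
\documentclass[conference]{IEEEtran}
\usepackage{cite}

\title{Your Paper Title}
% Keep the submission anonymized: no author names or affiliations.
\author{\IEEEauthorblockN{Anonymous Submission}}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph abstract.
\end{abstract}

\section{Introduction}
Body text in the required two-column, 10-point Times layout.

% Up to six pages of content; additional pages may be used for references.
\bibliographystyle{IEEEtran}
% \bibliography{references}

\end{document}
```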
For any questions, contact the workshop organizers at dls2023@ieee-security.org
All accepted submissions will be presented at the workshop. The archival papers will be included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.