About me
I am Robin Staab. Since July 2023, I have been a PhD student at the Secure, Reliable, and Intelligent Systems Lab at ETH Zürich, advised by Prof. Martin Vechev.
Education
- ETH Zurich, 2020 - 2023: M.Sc. Computer Science
- ETH Zurich, 2016 - 2020: B.Sc. Computer Science
Work Experience
- Snyk, Zurich, Aug - Dec 2021: Machine Learning Engineering Intern
Awards
- 2023: Willi Studer Prize for Best Master's Degree in Computer Science
Publications
2024
- A Synthetic Dataset for Personal Attribute Inference. Hanna Yukhymenko, Robin Staab, Mark Vero, Martin Vechev. NeurIPS Datasets and Benchmarks 2024.
- Private Attribute Inference from Images with Vision-Language Models. Batuhan Tömekçe, Mark Vero, Robin Staab, Martin Vechev. NeurIPS 2024.
- Exploiting LLM Quantization. Kazuki Egashira, Mark Vero, Robin Staab, Jingxuan He, Martin Vechev. NeurIPS 2024. NextGenAISafety@ICML24 Oral.
- Ward: Provable RAG Dataset Inference via LLM Watermarks. Nikola Jovanović, Robin Staab, Maximilian Baader, Martin Vechev. arXiv 2024.
- COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act. Philipp Guldimann, Alexander Spiridonov, Robin Staab, Nikola Jovanović, Mark Vero, Velko Vechev, Anna Gueorguieva, Mislav Balunović, Nikola Konstantinov, Pavol Bielik, Petar Tsankov, Martin Vechev. arXiv 2024.
- Discovering Clues of Spoofed LM Watermarks. Thibaud Gloaguen, Nikola Jovanović, Robin Staab, Martin Vechev. arXiv 2024.
- Watermark Stealing in Large Language Models. Nikola Jovanović, Robin Staab, Martin Vechev. ICML 2024. R2-FM@ICLR24 Oral.
- From Principle to Practice: Vertical Data Minimization for Machine Learning. Robin Staab, Nikola Jovanović, Mislav Balunović, Martin Vechev. IEEE S&P 2024.
- Beyond Memorization: Violating Privacy Via Inference with Large Language Models. Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev. ICLR 2024. Spotlight, 2024 PPPM-Award.
- Back to the Drawing Board for Fair Representation Learning. Angéline Pouget, Nikola Jovanović, Mark Vero, Robin Staab, Martin Vechev. arXiv 2024.
- Black-Box Detection of Language Model Watermarks. Thibaud Gloaguen, Nikola Jovanović, Robin Staab, Martin Vechev. arXiv 2024.
- Large Language Models are Advanced Anonymizers. Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev. arXiv 2024.
2023
- Abstract Interpretation of Fixpoint Iterators with Applications to Neural Networks. Mark Niklas Müller, Marc Fischer, Robin Staab, Martin Vechev. PLDI 2023.
2022
- Bayesian Framework for Gradient Leakage. Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev. ICLR 2022.