About me

Mark Niklas Müller is a postdoctoral researcher at the Secure, Reliable, and Intelligent Systems Lab (SRI Lab) at ETH Zürich, advised by Prof. Martin Vechev. Mark's research focuses on provable guarantees for machine learning models, including both deterministic and probabilistic certification methods for a diverse range of architectures, as well as certified training methods. He has led the research on these topics at the SRI Lab for the last two years, including two industry collaborations, co-organized the Verification of Neural Networks Competition 2022 (VNN-COMP'22), and was the lead organizer of the 2nd Workshop on Formal Verification of Machine Learning (WFVML'23) at ICML 2023. More recently, he has been working on decoding and contamination detection for large language models (LLMs).

Education

  • ETH Zurich, January 2020 - April 2024
    Doctoral Student in the Department of Computer Science
  • ETH Zurich, September 2019 - October 2020
    Visiting Student in the Department of Computer Science
  • University of Stuttgart, October 2018 - October 2020
    M.Sc. in Aerospace Engineering
  • University of Stuttgart, October 2014 - April 2018
    B.Sc. in Aerospace Engineering

Publications

2024

ConStat: Performance-Based Contamination Detection in Large Language Models
Jasper Dekoninck, Mark Niklas Müller, Martin Vechev
NeurIPS 2024
SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents
Niels Mündler, Mark Niklas Müller, Jingxuan He, Martin Vechev
NeurIPS 2024
Prompt Sketching for Large Language Models
Luca Beurer-Kellner, Mark Niklas Müller, Marc Fischer, Martin Vechev
ICML 2024
DAGER: Exact Gradient Inversion for Large Language Models
Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
arXiv 2024
Understanding Certified Training with Interval Bound Propagation
Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev
ICLR 2024
Expressivity of ReLU-Networks under Convex Relaxations
Maximilian Baader*, Mark Niklas Müller*, Yuhao Mao, Martin Vechev
ICLR 2024 * Equal contribution
SPEAR: Exact Gradient Inversion of Batches in Federated Learning
Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
arXiv 2024
Overcoming the Paradox of Certified Training with Gaussian Smoothing
Stefan Balauca, Mark Niklas Müller, Yuhao Mao, Maximilian Baader, Marc Fischer, Martin Vechev
arXiv 2024
Evading Data Contamination Detection for Language Models is (too) Easy
Jasper Dekoninck, Mark Niklas Müller, Maximilian Baader, Marc Fischer, Martin Vechev
arXiv 2024

2023

Automated Classification of Model Errors on ImageNet
Momchil Peychev*, Mark Niklas Müller*, Marc Fischer, Martin Vechev
NeurIPS 2023 * Equal contribution
Connecting Certified and Adversarial Training
Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev
NeurIPS 2023
Abstract Interpretation of Fixpoint Iterators with Applications to Neural Networks
Mark Niklas Müller, Marc Fischer, Robin Staab, Martin Vechev
PLDI 2023
Efficient Certified Training and Robustness Verification of Neural ODEs
Mustafa Zeqiri, Mark Niklas Müller, Marc Fischer, Martin Vechev
ICLR 2023
Certified Training: Small Boxes are All You Need
Mark Niklas Müller*, Franziska Eckert*, Marc Fischer, Martin Vechev
ICLR 2023 Spotlight * Equal contribution
First Three Years of the International Verification of Neural Networks Competition (VNN-COMP)
Christopher Brix, Mark Niklas Müller, Stanley Bak, Changliu Liu, Taylor T. Johnson
STTT ExPLAIn 2023

2022

The Third International Verification of Neural Networks Competition (VNN-COMP 2022): Summary and Results
Mark Niklas Müller*, Christopher Brix*, Stanley Bak, Changliu Liu, Taylor T. Johnson
arXiv 2022 * Equal contribution
(De-)Randomized Smoothing for Decision Stump Ensembles
Miklós Z. Horváth*, Mark Niklas Müller*, Marc Fischer, Martin Vechev
NeurIPS 2022 * Equal contribution
Robust and Accurate - Compositional Architectures for Randomized Smoothing
Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
SRML@ICLR 2022
Boosting Randomized Smoothing with Variance Reduced Classifiers
Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
ICLR 2022 Spotlight
Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound
Claudio Ferrari, Mark Niklas Müller, Nikola Jovanović, Martin Vechev
ICLR 2022
PRIMA: General and Precise Neural Network Certification via Scalable Convex Hull Approximations
Mark Niklas Müller*, Gleb Makarchuk*, Gagandeep Singh, Markus Püschel, Martin Vechev
POPL 2022 * Equal contribution

2021

Certify or Predict: Boosting Certified Robustness with Compositional Architectures
Mark Niklas Müller, Mislav Balunović, Martin Vechev
ICLR 2021

Work experience

  • G-Research, London, UK, 06/2023 - 09/2023
    Quantitative Research Summer Intern
  • Dr. Ing. h.c. F. Porsche AG, Weissach, DE, 11/2018 - 08/2019
    Working Student
  • Bosch Rexroth AG, Stuttgart, DE, 09/2018 - 10/2018
    Data Science Intern
  • Mercedes-AMG Petronas Formula One Team, Brackley, UK, 07/2017 - 07/2018
    Industrial Placement - Aerodynamicist

Awards

  • LRT Award for the best overall Master's degree in Aerospace Engineering
  • AIRBUS Defence & Space Award for the best overall Bachelor's degree in Aerospace Engineering