The Secure, Reliable, and Intelligent Systems (SRI) Lab is a research group in the Department of Computer Science at ETH Zurich.
Our research focuses on reliable, secure, and trustworthy machine learning, with an emphasis on large language models.
We currently study the controllability, security and privacy, and reliable evaluation of LLMs and their application to mathematical reasoning and coding, as well as generative AI watermarking, AI regulation, federated learning privacy, robustness and fairness certification, and quantum computing.
Our work has led to six ETH spin-offs:
NetFabric (AI for systems),
LogicStar (AI code agents),
LatticeFlow (robust ML),
Invariant Labs (secure AI agents; acquired),
DeepCode (AI for code; acquired),
and ChainSecurity (security verification; acquired).
To learn more about our work, see our Research page, recent Publications, and GitHub releases. To stay up to date, follow our group on Twitter.
Latest News
24.10.2025
Our work on sycophantic behavior in large language models was featured in a Nature article on the risks of LLM sycophancy in scientific research.
14.07.2025
The SRI Lab is presenting 14 papers at ICML 2025 in Vancouver: 9 at the main conference and 5 at workshops. See the Twitter thread for more details.
25.06.2025
Our ETH spin-off Invariant Labs was acquired by Snyk. See the article on the D-INFK news channel.
Most Recent Publications
Watermarking Autoregressive Image Generation
Nikola Jovanović, Ismail Labiad, Tomáš Souček, Martin Vechev, Pierre Fernandez
NeurIPS
2025
MathArena: Evaluating LLMs on Uncontaminated Math Competitions
Mislav Balunović, Jasper Dekoninck, Nikola Jovanović, Ivo Petrov, Martin Vechev
NeurIPS Datasets and Benchmarks
2025
qblaze: An Efficient and Scalable Sparse Quantum Simulator
Hristo Venev, Thien Udomsrirungruang, Dimitar Dimitrov, Timon Gehr, Martin Vechev
ACM OOPSLA
2025
Adaptive Generation of Bias-Eliciting Questions for LLMs
Robin Staab, Jasper Dekoninck, Maximilian Baader, Martin Vechev
ArXiv
2025