Recently, the Tübingen AI Center and the Cluster of Excellence “Machine Learning – New Perspectives for Science” organized the Tübingen Pre-ICLR Poster Event 2024. The workshop spotlighted twelve recent machine learning papers from Tübingen that had been accepted since the Pre-NeurIPS event in December. About 40 machine learning specialists from the University of Tübingen and the Max Planck Institutes for Intelligent Systems and Biological Cybernetics seized the opportunity to mingle, discuss their posters, and exchange ideas, preparing for the International Conference on Learning Representations (ICLR), which starts this week in Vienna.
The following posters were presented:
- John Kirchenbauer · Jonas Geiping · Yuxin Wen · Manli Shu · Khalid Saifullah · Kezhi Kong · Kasun Fernando · Aniruddha Saha · Micah Goldblum · Tom Goldstein: On the Reliability of Watermarks for Large Language Models, ICLR 2024
- André F. Cruz · Moritz Hardt: Unprocessing Seven Years of Algorithmic Fairness, ICLR 2024
- Yumeng Li · Margret Keuper · Dan Zhang · Anna Khoreva: Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive, ICLR 2024
- Vishaal Udandarao · Max F. Burg · Samuel Albanie · Matthias Bethge: Visual Data-Type Understanding does not emerge from Scaling Vision-Language Models, ICLR 2024
- Siyuan Guo · Jonas Wildberger · Bernhard Schölkopf: Out-of-Variable Generalization for Discriminative Models, ICLR 2024
- Zhijing Jin · Jiarui Liu · Zhiheng LYU · Spencer Poff · Mrinmaya Sachan · Rada Mihalcea · Mona Diab · Bernhard Schölkopf: Can Large Language Models Infer Causation from Correlation? ICLR 2024
- Takeru Miyato · Bernhard Jaeger · Max Welling · Andreas Geiger: GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers, ICLR 2024
- Christian Gumbsch · Noor Sajid · Georg Martius · Martin V. Butz: Learning Hierarchical World Models with Adaptive Temporal Abstractions from Discrete Latent Dynamics, Spotlight Poster, ICLR 2024
- Sina Khajehabdollahi · Roxana Zeraati · Emmanouil Giannakakis · Tim Schäfer · Georg Martius · Anna Levina: Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks, ICLR 2024
- Tobias Weber · Emilia Magnani · Marvin Pförtner · Philipp Hennig: Uncertainty Quantification for Fourier Neural Operators, accepted at the Workshop on “AI4DifferentialEquations In Science”, ICLR 2024
- Prasanna Mayilvahanan · Thaddäus Wiedemer · Evgenia Rusak · Matthias Bethge · Wieland Brendel: Does CLIP’s Generalization Performance Mainly Stem from High Train-Test Similarity?, ICLR 2024
- Karsten Roth · Lukas Thede · A. Sophia Koepke · Oriol Vinyals · Olivier Henaff · Zeynep Akata: Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model, ICLR 2024