An in-person local meetup in Cambridge for the
Neural Information Processing Systems (NeurIPS) conference.
The goal of this meetup is to bring together students, researchers, and engineers from the greater Cambridge area (UK) to meet and discuss machine learning research presented at NeurIPS. We also want to give researchers an opportunity to promote their work and meet people in their local community. The day will feature in-person presentations as well as poster and panel sessions.
Schedule
Friday, 6 December 2024
10:00 - 10:30
Registration and coffee
10:30 - 10:40
Welcome
10:40 - 12:00
Invited talks
10:40 - 11:00: John Bronskill
LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
11:00 - 11:20: Davide Buffelli
Exact, Tractable Gauss-Newton Optimization in Deep Reversible Architectures
11:20 - 11:40: Aliaksandra Shysheya and Cristiana Diaconu
On conditional diffusion models for PDE simulations
11:40 - 12:00: Sattar Vakili
Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm
14:00 - 15:00
Invited talks
14:00 - 14:20: N'yoma Diamond
On the Ethical Considerations of Generative Agents
14:20 - 14:40: Israel Mason-Williams
Knowledge Distillation: The Functional Perspective
14:40 - 15:00: Hanna Foerster
Beyond Slow Signs in High-fidelity Model Extraction
15:00 - 15:45
Panel session
The future of AI policy: anticipating challenges and driving positive change
Panellists: Emily Shuckburgh, Jess Montgomery, Neil Lawrence, Mateja Jamnik
15:45 - 16:15
Coffee
16:15 - 17:00
Keynote seminar
Anders C Hansen, Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Talk title: The consistent reasoning paradox, hallucinations and fallibility of super AI: The power of 'I don't know'
Abstract: We introduce the Consistent Reasoning Paradox (CRP), which applies to any artificial super intelligence (ASI), i.e. any AI surpassing human intelligence. Consistent reasoning, at the core of logical reasoning, is the ability to handle questions that are equivalent yet described by different sentences ('Is 1 > 0?' and 'Is one greater than 0?'). The CRP asserts that any ASI, because it must attempt to reason consistently, will always be fallible, like a human. Specifically, the CRP states that there are problems, e.g. in basic arithmetic, where any ASI that always answers and strives to reason consistently will hallucinate (produce wrong, yet plausible answers) infinitely often. The paradox is that there exists a non-consistently reasoning AI (which is not on the level of human intelligence) that will be correct on the same set of problems. The CRP also shows that detecting these hallucinations, even in a probabilistic sense, is strictly harder than solving the original problems, and that there are problems that an ASI may answer correctly but for which it cannot provide a correct logical explanation. Therefore, the CRP implies that any trustworthy AI (i.e., an AI that never answers incorrectly) that also reasons consistently must be able to say 'I don't know'. Moreover, this can only be done by implicitly computing a new concept that we introduce, termed the 'I don't know' function, something currently lacking in modern AI. In view of these insights, the CRP provides a glimpse into the behaviour of ASI: an ASI cannot be 'almost sure', nor can it always explain itself, and therefore to be trustworthy it must be able to say 'I don't know'.
17:00 - 19:00
Networking event
with light snacks and drinks
The end
Organisers
Christian Cabrera-Jojoa
Zak Shumaylov
Jasmine Bayrooti
Pritthijit Nath
Hong Ye Tan
Sam Willis
Annabelle Scott
Chairs
Carl Edward Rasmussen
Neil Lawrence
Carl Henrik Ek
Carola-Bibiane Schönlieb
José Miguel Hernández-Lobato
Location
Address:
University's West Cambridge site
15 JJ Thomson Avenue
Cambridge CB3 0FD
Rooms:
Ground Floor - Lecture Theatre 1 and The Street
Any questions? Email:
as599 'at' cam.ac.uk