We build models to make better decisions. We build explanations so we can trust those decisions. But what if the explanation is confidently pointing at the wrong thing? In this lecture I want to show that interpretability is not just a tool for transparency. It is a lens for catching when your model has learned something it should not have, and a bridge to causal reasoning that actually holds up in the real world.
In this seminar, I will motivate set-theoretic Neural Reasoning and present the first such neural network, the Sphere Neural Network, which achieves the rigour of symbolic-level Aristotelian syllogistic reasoning (the historical starting point of logical reasoning) and its variants by constructing a sphere configuration as an Euler diagram (a semantic model). I will argue that, limited as it is to vector embeddings (spheres with zero radius), traditional Neural Reasoning (supervised deep learning) cannot achieve rigorous syllogistic reasoning.
This free event is open only to members of the University of Cambridge (and affiliated institutes). Please be aware that we are unable to offer consultations outside clinic hours.
If you would like to participate, please sign up, as we will not be able to offer a consultation otherwise. Please sign up through the following link: https://forms.gle/Jx73BwGykJuem4wE7. Sign-up is possible from Mar 12 midday (12pm) until Mar 16 midday, or until we reach full capacity, whichever is earlier. If you have signed up successfully, we will confirm your appointment by Mar 18 midday.
Enrique Amigó (National University of Distance Education, Madrid, Spain)
In this talk (based on a book draft, see this link) I propose a unified formal framework for ground-truth-based evaluation metrics and task characterization, grounded in measurement theory. Building on this foundation, I analyze the formal properties of existing metrics and organize them into families according to task characteristics. The book covers a wide range of discriminative tasks, including classification, ranking, clustering, and sequence labelling, among others, as well as text generation.
Users’ subjective experience of a technology’s transparency plays a pivotal role in human-computer interaction, shaping trust, satisfaction, and technology use. Moreover, as interactive systems become more autonomous and complex, industry and policy increasingly acknowledge users’ growing need to understand what a technology is doing, how it functions, and why it produces certain outcomes. Moving beyond the currently fragmented research landscape, this talk offers a comprehensive perspective on technology transparency.
Speaker: Sarah Bowling, Ph.D.
Assistant Professor in the Department of Developmental Biology at Stanford University School of Medicine
Title: “How life finds a way: resilience in mammalian embryogenesis”
Abstract: TBC
Short bio: Dr. Sarah Bowling is an Assistant Professor in the Department of Developmental Biology at Stanford University School of Medicine. Her laboratory focuses on understanding the mechanisms governing resilience in mammalian embryogenesis, i.e., determining how embryos withstand and recover from diverse genetic and environmental perturbations.
When designing complex systems, we need to consider multiple trade-offs at various abstraction levels and scales, and choices of single components need to be studied jointly. For instance, the design of future mobility solutions (e.g., autonomous vehicles, micromobility) and the design of the mobility systems they enable are closely coupled. Indeed, knowledge about the intended service of novel mobility solutions would impact their design and deployment process, while insights about their technological development could significantly affect transportation management policies.
Professor Thomas G. Dietterich, School of EECS, Oregon State University
Exogenous state variables and rewards can slow reinforcement learning by injecting uncontrolled variation into the reward signal. In this talk, I’ll describe our work on formalizing exogenous state variables and rewards. Then I’ll discuss our main result: if the reward function decomposes additively into endogenous and exogenous components, the MDP can be decomposed into an exogenous Markov Reward Process (based on the exogenous reward) and an endogenous Markov Decision Process (optimizing the endogenous reward).
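The decomposition result stated above can be summarised compactly. The notation below is assumed for illustration and is not taken from the talk materials:

```latex
% Assumed notation: the state factors as s = (e, x), where e is the
% endogenous part (affected by actions) and x the exogenous part.
% If the reward decomposes additively,
\[
  R(s, a) \;=\; R_{\mathrm{end}}(e, a) \;+\; R_{\mathrm{exo}}(x),
  \qquad s = (e, x),
\]
% then the MDP splits into an exogenous Markov Reward Process over x
% (carrying reward R_exo, uncontrolled by the agent) and an endogenous
% MDP over e (carrying reward R_end), and it suffices to optimise the
% endogenous MDP alone.
```

Under this sketch, the exogenous component contributes only uncontrolled variance to the return, which is why removing it can speed up learning.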
Gerardo Duran-Martin, Oxford-Man Institute, University of Oxford
We propose a unifying framework for methods that perform probabilistic online learning in non-stationary environments. We call the framework BONE, which stands for generalised (B)ayesian (O)nline learning in (N)on-stationary (E)nvironments. BONE provides a common structure to tackle a variety of problems, including online continual learning, prequential forecasting, and contextual bandits.
In this talk, I will present CodeScaler, a novel framework designed to overcome the scalability bottlenecks of Reinforcement Learning from Verifiable Rewards (RLVR) in code generation. While traditional RLVR relies heavily on the availability of high-quality unit tests—which are often scarce or unreliable—CodeScaler introduces an execution-free reward model that scales both training and test-time inference.
Kevin Monteiro, Department of Computer Science and Technology
Sleep disorders, particularly insomnia, and mental health
conditions affect a significant fraction of adults worldwide, posing serious mental and physical health risks. Music therapy offers a promising, low-cost, and non-invasive treatment, but current approaches rely heavily on expert-curated playlists, limiting scalability and personalisation. We propose a low-cost generative system leveraging recent advances in diffusion models to synthesize music for therapy. We focus on insomnia and curate a dataset of waveform sleep music to generate audio tailored to sleep.
Prof Isabelle Augenstein (University of Copenhagen)
Language Models (LMs) acquire parametric knowledge from their training process, embedding it within their weights. The increasing scale of LMs, however, poses significant challenges for understanding a model's inner workings, and for updating or correcting this embedded knowledge without the significant cost of retraining. Moreover, when using these language models for knowledge-intensive language understanding tasks, LMs have to integrate relevant context to mitigate their inherent weaknesses, such as incomplete or outdated knowledge.
We warmly invite you to the C2D3 Computational Biology Annual Symposium 2026. This event is open to everyone in the Computational Biology Community.
https://www.c2d3.cam.ac.uk/events/comp-bio-2026
Early Career Researcher: Abstract Submission
We are inviting Early Career Researchers to present their research during the symposium. Talks should be 17 minutes each, and a short Q&A will follow. Abstract submission - Deadline 9am 1st April 2026.
Registrations
Registration is essential. A waitlist will open if capacity is reached. Registrations - Deadline 9am Monday 4th May 2026.
Scientific discovery emerges not from isolated reasoning, but from the intersection of diverse epistemic traditions. This talk proposes that the modern AI ecosystem, a structured network of heterogeneous reasoning agents spanning approximate and rigorous inference, constitutes a new form of collaborative intelligence for scientific inquiry. Drawing on Simon's conception of reasoning as adaptive search, we argue that such ecosystems do not merely accelerate known reasoning pathways, but create conditions under which genuinely novel representations may emerge.