Oliver Elbert - Computational Scientist, NOAA GFDL
The era of exascale supercomputing promises great advances in high-performance research computing, from increased simulation resolution and larger ensemble sizes to more realistic and complex models. To make the most of the next generation of HPC resources, however, we must re-engineer our applications to run efficiently on a variety of hardware architectures.
Dr Aleksej Zelezniak (5 March)
Associate Professor, Chalmers University of Technology (Sweden) and King's College London
Talk title: TBC
Hosted by: Susanne Bornelöv
I will present a family of sampling-based uncertainty measures that generalise surprisal and allow us to express a wider range of hypotheses about the workings of incremental language processing.
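For context, surprisal here is the standard information-theoretic quantity; a sketch of how sampling-based measures can generalise it follows (the notation is my assumption, not the speaker's):

```latex
% Surprisal of word w_t given its left context, under a language model p_theta:
s(w_t) = -\log p_\theta(w_t \mid w_{<t})
% One sampling-based generalisation: the entropy of the next-word
% distribution, i.e. the expected surprisal over candidate continuations:
H_t = \mathbb{E}_{w \sim p_\theta(\cdot \mid w_{<t})}\!\left[ -\log p_\theta(w \mid w_{<t}) \right]
```

Measures of this family are estimated by sampling continuations from the model rather than scoring only the observed word.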
Abstract: Popularly known for its IDEs and for being behind the Kotlin language, JetBrains is also heavily involved in AI, not just by integrating external tools into its IDEs but also by developing its own. This talk will be beginner-friendly, helping students understand how we reached the current age of AI agents, and how they can both build and use them.
We will also take a brief look at what JetBrains is doing and how you, as a student, can get involved with our organisation, and we will end the afternoon with a fun quiz, some food and cool merchandise!
The ideas of aleatoric and epistemic uncertainty are widely used to reason about the probabilistic predictions of machine-learning models. We identify incoherence in existing discussions of these ideas and suggest that it stems from the aleatoric-epistemic view being insufficiently expressive to capture all the distinct quantities researchers are interested in. To address this, we present a decision-theoretic perspective that relates rigorous notions of uncertainty, predictive performance and statistical dispersion in data. This serves to support clearer thinking as the field moves forward.
High-performance computing is becoming increasingly heterogeneous, from the spread of different GPU families to the rise of new specialised accelerators, particularly in the machine-learning domain.
In this talk we will cover some of the tools available for high-performance computing in the Julia programming language, from distributed computing with MPI to accelerating numerical code on GPUs, with a particular focus on vendor-agnostic solutions such as KernelAbstractions.jl to ensure portability.
Modern computers are collections of heterogeneous components, including GPUs, TPUs, NPUs, FPGAs and other devices that carry out computing tasks but which are not the central CPU. We are proposing novel methods of program compilation, transformation and scheduling that take advantage of the entire system, so that computation takes place in the most appropriate place at the most propitious time.
This free event is open only to members of the University of Cambridge (and affiliated institutes). Please be aware that we are unable to offer consultations outside clinic hours.
If you would like to participate, please sign up, as we will not be able to offer a consultation otherwise. Please sign up through the following link: https://forms.gle/Jx73BwGykJuem4wE7. Sign-up is possible from Mar 12 midday (12pm) until Mar 16 midday, or until we reach full capacity, whichever is earlier. If you have successfully signed up, we will confirm your appointment by Mar 18 midday.
Speaker: Sarah Bowling, Ph.D.
Assistant Professor in the Department of Developmental Biology at Stanford University School of Medicine
Title: “How life finds a way: resilience in mammalian embryogenesis”
Abstract: TBC
Short bio: Dr. Sarah Bowling is an Assistant Professor in the Department of Developmental Biology at Stanford University School of Medicine. Her laboratory focuses on understanding the mechanisms governing resilience in mammalian embryogenesis, i.e. determining how embryos withstand and recover from diverse genetic and environmental perturbations.
When designing complex systems, we need to consider multiple trade-offs at various abstraction levels and scales, and choices of single components need to be studied jointly. For instance, the design of future mobility solutions (e.g., autonomous vehicles, micromobility) and the design of the mobility systems they enable are closely coupled. Indeed, knowledge about the intended service of novel mobility solutions would impact their design and deployment process, while insights about their technological development could significantly affect transportation management policies.
Professor Thomas G. Dietterich, School of EECS, Oregon State University
Exogenous state variables and rewards can slow reinforcement learning by injecting uncontrolled variation into the reward signal. In this talk, I’ll describe our work on formalizing exogenous state variables and rewards. Then I’ll discuss our main result: if the reward function decomposes additively into endogenous and exogenous components, the MDP can be decomposed into an exogenous Markov Reward Process (based on the exogenous reward) and an endogenous Markov Decision Process (optimizing the endogenous reward).
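Schematically, the additive decomposition the abstract describes can be written as follows; the symbols are illustrative assumptions, not the speaker's notation:

```latex
% State factors into endogenous and exogenous parts, where the
% exogenous dynamics are unaffected by the agent's actions:
s = (s^{\mathrm{en}}, s^{\mathrm{ex}}), \qquad
P(s'^{\mathrm{ex}} \mid s^{\mathrm{ex}}, a) = P(s'^{\mathrm{ex}} \mid s^{\mathrm{ex}})
% Additive reward decomposition:
R(s, a) = R^{\mathrm{en}}(s^{\mathrm{en}}, a) + R^{\mathrm{ex}}(s^{\mathrm{ex}})
% The value function then splits accordingly:
V^{\pi}(s) = V^{\pi}_{\mathrm{en}}(s^{\mathrm{en}}) + V_{\mathrm{ex}}(s^{\mathrm{ex}})
```

Because the exogenous component contributes the same value under every policy, it can be dropped from the learning objective, which is what removes the uncontrolled variance from the reward signal.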
Kevin Monteiro, Department of Computer Science and Technology
Sleep disorders, particularly insomnia, and mental health conditions affect a significant fraction of adults worldwide, posing serious mental and physical health risks. Music therapy offers a promising, low-cost and non-invasive treatment, but current approaches rely heavily on expert-curated playlists, limiting scalability and personalisation. We propose a low-cost generative system leveraging recent advances in diffusion models to synthesize music for therapy. We focus on insomnia and curate a dataset of waveform sleep music to generate audio tailored to sleep.
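As a rough illustration of the technique named above, here is a toy DDPM-style reverse (sampling) step applied to a random "waveform"; the noise predictor, noise schedule, and audio length are placeholders I made up for the sketch, not the authors' system:

```python
import numpy as np

def predict_noise(x, t):
    """Hypothetical stand-in for a trained diffusion model's noise
    predictor; a real system would run a neural network here."""
    return np.zeros_like(x)

def ddpm_reverse_step(x_t, t, betas, rng):
    """One reverse-diffusion update from x_t to x_{t-1} (DDPM sampling)."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    eps = predict_noise(x_t, t)
    # Mean of the reverse transition, per the standard DDPM update rule.
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        # Add stochastic noise on all but the final step.
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

# Example: denoise a one-second toy "waveform" over a short schedule,
# starting from pure Gaussian noise.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 10)
x = rng.standard_normal(16000)
for t in reversed(range(10)):
    x = ddpm_reverse_step(x, t, betas, rng)
print(x.shape)
```

A real sleep-music generator would replace the placeholder predictor with a model trained on the curated waveform dataset and run many more denoising steps.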
Prof Isabelle Augenstein (University of Copenhagen)
Language Models (LMs) acquire parametric knowledge during training, embedding it within their weights. The increasing scale of LMs, however, poses significant challenges both for understanding a model's inner workings and for updating or correcting this embedded knowledge without the significant cost of retraining. Moreover, when these language models are used for knowledge-intensive language understanding tasks, they have to integrate relevant context to mitigate their inherent weaknesses, such as incomplete or outdated knowledge.