PhD opportunities in Mathematical Sciences

All our PhD programmes in Mathematical Sciences, Actuarial Mathematics and Statistics (AM&S) are shared with the University of Edinburgh. Indeed, the Mathematics and AM&S departments at Heriot-Watt, together with the University of Edinburgh Mathematics department, form the Maxwell Institute for Mathematical Sciences.

Information on all our programmes, funding opportunities and how to apply can be found on the Maxwell Institute Graduate School (MIGS) website. PhD opportunities in mathematical physics, algebra, geometry and topology can be found within the CDT in Algebra, Geometry and Quantum Fields.

A description of the research areas we cover within Mathematics and AM&S can be found on our Research in Mathematical Sciences page. 

Below you will find descriptions of many of the PhD projects we offer, organised by research theme:

Studentships in Mathematics and Statistics Innovation

The deadline for applications is midnight on 28th February 2025. Applications may re-open if all positions are not filled.

Title: EPSRC Doctoral Studentships in Mathematics and Statistics Innovation

Organisation Name: Heriot-Watt University

About our Organisation: Heriot-Watt University has established a reputation for world-class teaching and leading-edge, relevant research, which has made it one of the top UK universities for innovation, business and industry.

Heriot-Watt University has five campuses: three in the UK (Edinburgh, Scottish Borders and Orkney), one in Dubai and one in Malaysia. The University offers a highly distinctive range of degree programmes in the specialist areas of science, engineering, design, business and finance. Heriot-Watt is also Scotland's most international university, boasting the largest international student cohort.

Successful candidates will be affiliated with the School of Mathematical and Computer Sciences in Edinburgh and will work closely on projects stemming from the newly established Centre for Mathematics-Driven Innovation (M-DICE).

The Opportunity: These EPSRC Doctoral Studentships are funded by the Engineering and Physical Sciences Research Council (EPSRC) as part of Heriot-Watt University’s Doctoral Training Partnership award.

EPSRC Doctoral Studentships in Mathematics and Statistics Innovation are fully funded PhD studentships (please check eligibility below) spanning different areas of the EPSRC remit within the Mathematical Sciences, including Statistics and Data Science. The successful candidates will work on specific project(s) within the Centre for Mathematics-Driven Innovation (M-DICE), aiming to address industrial and/or applied-science challenges by employing cutting-edge mathematical and statistical modelling. The duration of the funding is 3.5 years per PhD student.

The potential PhD projects available for a September 2025 start are listed below, with short descriptions.

Eligibility Essentials: The following eligibility criteria are essential for an application to be evaluated.

Academic conditions:

To receive EPSRC studentship funding, you must have qualifications or experience equal to an honours degree at first or upper second class level, or a master's degree, from a UK academic research organisation.

Degree qualifications gained outside the UK, or a combination of qualifications and experience that is equivalent to a relevant UK degree, might be accepted in some cases.

Residential eligibility criteria:

Studentships for this call are limited to home students only; successful candidates will receive a full award of stipend and fees at the home level.

To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

See the UKRI terms and conditions of training grants for full details.

How to apply

Deadline: The deadline for applications is midnight on 28th February 2025. Applications may re-open if all positions are not filled.

The application process for EPSRC Doctoral Studentships in Mathematics and Statistics Innovation is centred around available research areas/projects, henceforth collectively termed “Projects”. Each Project designates an academic Supervisor and, in some cases, one or more Co-supervisors. Informal enquiries about a project can be addressed to the project’s Supervisor. Enquiries about the application procedure can be addressed to pgadmissions@hw.ac.uk.

Each applicant may apply to a maximum of two Projects.

Applicants should apply through the HWU Postgraduate Application Portal for a PhD in Mathematics. Applicants should mention that they are applying for the EPSRC Doctoral Studentships in Mathematics and Statistics Innovation and state, in the Application Form, the project(s) they are interested in and the respective Supervisor(s).

Shortlisted candidates will be invited to interview, anticipated in March 2025. Successful candidates will be notified as soon as possible thereafter. Applications may reopen in July if not all positions are filled.

All projects have a non-academic/industrial component of varying degree and an industrial co-supervisor. It is not possible to disclose the specific companies/organisations related to a project at the application stage; this information may be given during interviews.

Available Projects

Image classification of super-resolution ultrasound prostate cancer maps

Supervisor(s): M. Vallejo (MACS, supervisor) & V. Sboros (EPS co-supervisor)

Description: Prostate cancer is a disease with high incidence, high mortality and a high rate of avoidable intervention. It is the most common cancer in men and has the second-highest mortality rate. The current diagnostic pathway is known to miss up to 22% of significant cancers, while nearly 60% of invasive procedures could be avoided. It is also established that more tumours need to be detected (diagnostic sensitivity) and better classified and localised (specificity) to improve these figures and inform treatment. This relies on the development of better and widely affordable imaging techniques, like super-resolution ultrasound.

In this project, we will develop machine learning techniques to characterise and classify super-resolution ultrasound images to support more successful prostate cancer detection, especially at its early stage. A combination of medical images with images obtained from numerical simulations of the mathematical model for cancer growth and growth-induced angiogenesis based on characteristics of the blood vessel network will be used to train the machine learning algorithms. The mathematical models will combine partial differential equations for chemical dynamics in a cell tissue with a discrete description of the blood network. The existing mathematical models will be extended to address specific properties of prostate cancer. 

References:

  • Sboros et al. Ultrasound Med Biol 2011, 37.
  • Chaplain et al. In Molecular, Cellular, Tissue Level Aspects & Implications, Ed Jackson 2011, 167.
  • Machado et al, Microcirculation 2011, 18.
  • Wu et al. J Theor Biol 2014, 355.
  • Kanoulas et al. Invest Radiol 2019, 500.
  • Papageorgiou et al. IEEE IUS Symp 2022.

Mathematical Modelling of Wave Energy Converters

Supervisor(s): C. Cummins (MACS, supervisor) 

Description: When you hold a seashell to your ear, the 'sound of the sea' you hear is due in part to a phenomenon known as Helmholtz Resonance (HR). The same effect can be heard in a car driving on a motorway with one window slightly open. While these are acoustic examples, a similar resonance happens in water. Harbours, for instance, can face catastrophic damage when incoming waves match the harbour's fundamental frequency – the Helmholtz mode.
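For reference, in the textbook acoustic case a resonator consisting of a cavity of volume $V$ with a neck of cross-sectional area $A$ and effective length $L$ rings at approximately

    $f_0 = \frac{c}{2\pi}\sqrt{\frac{A}{V L}}$,

where $c$ is the speed of sound; for a harbour or a WEC chamber, the analogous Helmholtz mode involves the corresponding hydrodynamic quantities.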

Recently, we discovered that a particular class of device designed to harness wave energy, namely the wave energy converters (WECs) developed by our project partner, uniquely exhibits this HR. However, the potential power from this resonance has not been fully utilised, because of two main issues. First, there is a lack of deep understanding of this resonance in WECs. Second, “viscous losses” weaken this resonance, much like placing your hand near the open window of the car reduces the thudding sound. The mathematical modelling of viscous losses using computational fluid dynamics (CFD) is computationally challenging, requiring supercomputers and taking many days to complete, which makes it difficult to understand how to mitigate these viscous losses when designing new WECs.

In this project, we will develop a new and efficient mathematical model that takes these viscous losses into account [1], but which is several orders of magnitude faster than CFD and does not require the use of supercomputers. While our primary focus is on improving the performance of WECs while they undergo HR, the method is entirely general, so it has much broader applicability: it can be used for any underwater structure, not just WECs. By understanding and mitigating these effects, we aim to boost the efficiency of WECs, reducing their costs. Our plan is to share this method with the wider marine engineering community by creating an open-source code that can benefit numerous marine applications [1,2].

References:

  1.  Cummins, C. P. & Dias, F. A new model of viscous dissipation for an oscillating wave surge converter. J. Eng. Math. 103, 195–216 (2017).
  2. Cummins, C. P., Scarlett, G. T. & Windt, C. Numerical analysis of wave-structure interaction of regular waves with surface-piercing inclined plates. J. Ocean Eng. Mar. Energy 8, 99–115 (2022).
  3. Ancellin, M. & Dias, F. Capytaine: a Python-based linear potential flow solver. J. Open Source Softw. 4 (2019).

Statistical learning for quantifying meteorological event-related risks

Supervisor(s): G. Tzougas (MACS, supervisor), G. Streftaris (MACS, co-supervisor)

Description: Quantifying meteorological event-related risks has become increasingly important in general insurance, as extreme climate events may trigger excess claims that can have a detrimental impact on the insurer’s portfolio. On the other hand, it is challenging to model the relation between climate events and claim frequencies, since detailed information on climate events is often not fully recorded. Motivated by these issues, in this project we will model the number and the cost of claims to characterise meteorological event-related risks.

Multivariate Spatiotemporal Hybrid Neural Network Regression Models

Supervisor(s): G. Tzougas (MACS, supervisor), G. Streftaris (MACS, co-supervisor)

Description: In this project, we propose a novel approach to modelling multivariate claim frequency data with dependence structures across the claim count responses, which may exhibit different signs and ranges, as well as overdispersion due to unobserved heterogeneity. We will analyse claim rates within a property insurance portfolio of an insurance company, particularly prompted by extreme weather-related events in Greece.

Key drivers in cancer morbidity and mortality disparities: past and future

Supervisor(s): G. Streftaris (MACS, supervisor), G. Tzougas, A. Arik (MACS, co-supervisors)

Description: The aim of this project is to investigate cancer incidence and mortality rates in various sub-national groups, based on demographic/socio-economic factors (e.g. ethnicity, education, deprivation, country of origin). The research will use advanced statistical and machine learning modelling to address whether widening socio-economic differences in some cause-specific deaths are related to migration or other demographic factors.

Landscape management and the pace of nature recovery: multiscale modelling and simulation

Supervisor(s): Michela Ottobre (MACS, supervisor), Christina Cobbold (University of Glasgow, co-supervisor), Emma Gardner (UK Centre for Ecology & Hydrology, industrial co-supervisor)

Description: Human activity has a tremendous impact on shaping our landscapes and the habitats within them. If the rate of landscape change is too fast, for example as a consequence of changing management strategies, species may not be able to adapt, with consequent biodiversity loss. In this project we will consider this issue for structured, inhomogeneous landscapes, where different patches of land may be used for different purposes (for example mixed rewilding/agricultural). From a mathematical perspective this will entail the use of (stochastic) multiscale modelling and analysis on spatially inhomogeneous models.

Holistic Quantitative Risk Management Strategies with Application to Flood Risk Management

Supervisor(s): A. Chong (MACS, supervisor)

Description: Despite tremendous effort, many countries have recently fallen short of their targets for reducing greenhouse gas emissions. If this leads us to a new norm, with more frequent and severe flooding, droughts, tropical cyclones, wildfires, and so on, are we prepared to adapt? With a limited budget and limited resources available, who should be responsible for enhancing resilience, and how? This project aims to develop holistic quantitative risk management strategies to answer these questions scientifically, via a probabilistic and game-theoretic approach. The developed theory shall then be applied to flood risk management, and in particular shall provide a systematic analysis of the costs and benefits of property flood resilience.

Projects in Data and Decisions

Climate change, mortality and pensions

Description: The aim of this PhD proposal is to provide insights into the crucial importance and impact of climate change on the solvency of pension plans, through the modelling of mortality and morbidity rates. The pension sector is extremely large, valued at GBP 13.9 billion in 2021. Pensions are critical to enable people to pay for food, rent and other daily needs when they stop working.

Current mortality and morbidity models used in industry rely on, for example, the age of the individuals and the evolution of ageing over time. These models need to be adjusted to consider the effects of global warming and extreme natural events. Very few academic papers have been written on this topic, so more sophisticated models incorporating climate change risks must be developed for the pension sector.

This is important because pension plans play a vital role in allowing individuals and society to manage climate change risk and making net zero a reality. Due to the long-term nature of their liabilities and vast sums of money invested in pensions, the pension sector can invest their assets in long-term, transformative infrastructure projects, to support the transition to a net zero world.

The objectives of this industrial collaboration and PhD project will be the following:

  • To determine what climate change risks should be included in models of mortality and morbidity;
  • To forecast mortality and morbidity rates by including climate change risks in these models;
  • To estimate the financial impact of climate-related risks on pension plans under stressed scenarios;
  • To identify new opportunities for pension plans under climate change risk: what should they do to address the challenges of climate change? Specifically, potential risk management solutions will be analysed.

Supervisors: Carmen Boado-Penas and Catherine Donnelly

Industrial supervisors: Scott Eason (Partner at BW and Head of Insurance and Longevity Consulting) and Kim Durniat (Partner and Head of Life Consulting) have agreed to become members of the thesis supervisory team.

Barnett Waddingham (BW) is a leading independent UK professional services consultancy across risk, pensions, investment and insurance.

Deep Learning Methods for Credit Risk Models

Supervisor: Wei Wei 

Description: In this project you will explore the development of deep learning methods for credit risk models. This requires developing pricing and calibration methods for nonlinear models in credit risk. Techniques applied here include semi-linear parabolic partial differential equations, backward stochastic differential equations, and deep learning algorithms for high-dimensional optimisation problems. By the end of the project, you are expected to have a broad view of general analytical and computational tools for credit risk models.

Stochastic control methods for quantitative behavioural finance

Supervisor: Wei Wei 

Description: Behavioural finance is the study of the influence of human emotions and psychology on financial decision-making. When psychological factors are involved, decision-making problems become time-inconsistent, in that an optimal rule obtained today may no longer be optimal from the perspective of a future date. In this project, you will explore methodological developments in stochastic control to tame the time-inconsistency arising from quantitative behavioural finance models. This will require developing time-inconsistent stochastic control theory and designing efficient numerical methods to analyse problems in behavioural finance.

Factors influencing the time to disease fade-out

Supervisor: Damian Clancy 

Description: The spread of infectious disease through a population is an inherently random process, and can be studied using stochastic models. For diseases which become endemic in a population, one object of interest is the time until fade-out of infection (a random variable). The expected time to fade-out may be computed straightforwardly through Monte Carlo simulation, or more exactly from general Markov process theory. For more complicated models, implementing these approaches becomes less straightforward, and approximation methods may also be needed. In this project, you will investigate a variety of approaches, with the aim of understanding the effects of particular disease features upon the time to fade-out. There are many different features of different diseases that you could study - for instance, you might examine the impact of environmental transmission upon disease persistence, or the effects of changes in the birth and death rates of the susceptible population.
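As a minimal illustration of the Monte Carlo route, here is a sketch in Python for the standard stochastic SIS model, with illustrative parameter values rather than anything specific to the project:

    import numpy as np

    rng = np.random.default_rng(1)

    def time_to_fadeout(N=30, beta=1.5, gamma=1.0, I0=3, t_max=1e4):
        # One realisation of the stochastic SIS model (Gillespie algorithm):
        # infection occurs at rate beta*S*I/N, recovery at rate gamma*I.
        t, I = 0.0, I0
        while I > 0 and t < t_max:
            rate_inf = beta * (N - I) * I / N
            rate_rec = gamma * I
            total = rate_inf + rate_rec
            t += rng.exponential(1.0 / total)
            I += 1 if rng.random() < rate_inf / total else -1
        return t

    # Plain Monte Carlo estimate of the expected time to fade-out.
    times = [time_to_fadeout() for _ in range(2000)]
    print(f"mean time to fade-out: {np.mean(times):.1f}")

For larger populations or larger reproduction numbers the fade-out time grows rapidly and plain simulation becomes expensive, which is where the more exact and approximate methods mentioned above come in.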

References:  

"Approximating time to extinction for endemic infection models" by Damian Clancy and Elliott Tjia (2018), Methodology and Computing in Applied Probability volume 20, pages 1043–1067 10.1007/s11009-018-9621-8 

"The Influence of Latent and Chronic Infection on Pathogen Persistence" by A. O'Neill, A. White, D. Clancy, F. Ruiz-Fons & C. Gortázar (2021), Mathematics volume 9, article number 1007 https://doi.org/10.3390/math9091007 

Longevity risk management

Supervisor: Andrew Cairns

Description: Pension plans and life insurers are exposed to longevity risk: the risk that pensioners, in aggregate, live longer than anticipated. This has caused these institutions to look at ways to manage this risk. This project will look at (a) models to measure the underlying risk; (b) innovative ways to manage the risk; and (c) the use of stochastic models to assess the effectiveness of different risk management solutions.

Cause of death measurement and modelling

Supervisor: Andrew Cairns

Description: Recent years have seen a huge increase in the availability of mortality data by cause of death rather than just all-cause mortality (see, for example, www.causesofdeath.org). The use of cause-of-death data gives us greater insight into the past (e.g. drivers of past mortality improvements) as well as the future (e.g. which causes are likely to drive future all-cause mortality improvements?). This presents us with new challenges: what is the most effective way to model these data to gain the best insights?

Peer-to-Peer Risk Sharing

Description: Risk sharing is the core of insurance and, more generally, of risk management. Traditional insurance products are built upon centralised models, where an insurer is the central node that establishes a bilateral risk sharing agreement with each of its policyholders. In the last decade, a modern decentralised insurance model, so-called peer-to-peer (P2P) insurance, has been developing rapidly in the industry, offered for example by Friendsurance, Inspeer, and Lemonade. This has recently sparked fundamental research on the risk sharing mechanisms of P2P insurance. This project shall first review the recent literature on P2P risk sharing, then advance the theoretical foundation of P2P risk sharing, and finally compare it with classical centralised models.

Supervisor: Alfred Chong

Applications of Reinforcement Learning in Insurance

Description: Model-based solutions have been well developed for various topics in insurance. However, these solutions naturally suffer from any model miscalibration and/or misspecification. If a model is miscalibrated and/or misspecified, without retuning the model timely and swiftly, a model-based solution can lead to losses, if not catastrophic consequences, for an insurance company. Reinforcement learning (RL), a flourishing sub-field of machine learning, has already proved powerful at automating a wide range of non-actuarial tasks resembling human intelligence. Inspired by Chong et al. (2021, 2022), in which RL is applied to derive self-revising hedging strategies for variable annuity contracts, this project shall explore further applications of reinforcement learning in insurance.

Supervisor: Alfred Chong

Forward Preferences in Insurance

Supervisor: Alfred Chong

Description: Forward preferences, pioneered by Musiela and Zariphopoulou (2007), were developed to address limitations in classical utility maximization problems. Classical problems typically fix a priori the horizon of interest, the model of dynamics, and the agent's future utility function. These assumptions deviate from insurance practice, particularly due to the long horizons of insurance products. Insurance often involves random events, such as future lifetime, and mortality models may be revised based on updated health information. Inspired by Chong (2019), and Ng and Chong (2024), this project will fundamentally revisit actuarial topics that are based on classical utility maximization problems, within the forward framework. This will help shed light on the pros and cons of both the classical backward and novel forward models.
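Roughly, in the sense of Musiela and Zariphopoulou (2007), a forward performance process is an adapted random field $U_t(x)$, increasing and concave in $x$, specified at the initial time through $U_0$, such that along the wealth process $X^\pi$ of any admissible strategy $\pi$,

    $\mathbb{E}[\,U_t(X_t^\pi) \mid \mathcal{F}_s\,] \le U_s(X_s^\pi)$ for all $s \le t$,

with equality for some optimal $\pi^*$. The criterion thus evolves forward in time, so no horizon, future model, or terminal utility needs to be fixed a priori.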

References:

  • Musiela M. and Zariphopoulou T. (2007) Investment and valuation under backward and forward dynamic exponential utilities in a stochastic factor model. Advances in Mathematical Finance 2007, 303-334.
  • W. F. Chong (2019). Pricing and hedging equity-linked life insurance contracts beyond the classical paradigm: the principle of equivalent forward preferences. Insurance: Mathematics and Economics 88, 93-107.
  • K. T. H. Ng and W. F. Chong (2024). Optimal investment in defined contribution pension schemes with forward utility preferences. Insurance: Mathematics and Economics 114, 192-211.

Efficient computation of rare-event risk measures

Description: Certain rare events have high cost, both humanitarian and financial, which makes them significant events that industries and governments must plan for. Taking measures to reduce or mitigate the risks of such events is the goal of risk management, which requires accurate assessment of those risks. This project's goal is to speed up the computation of accurate risk measures of rare events, to ensure effective risk management. This will be achieved by developing novel computational methods that exploit approximation properties of the underlying stochastic models and that are based on Monte Carlo and random sampling methods, which are easily parallelisable and can fully exploit the increased availability of computational resources.
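To see why this is hard, here is the naive baseline such methods aim to improve on: plain Monte Carlo estimation of a tail probability for a toy, entirely hypothetical heavy-tailed loss model (a Python sketch; none of these parameters come from the project):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy loss model: a heavy-tailed (lognormal) loss L.
    u = 1000.0                   # rare-event threshold
    n = 10**6                    # number of Monte Carlo samples
    losses = rng.lognormal(mean=0.0, sigma=2.0, size=n)

    p_hat = np.mean(losses > u)  # plain Monte Carlo estimate of P(L > u)
    se = np.sqrt(p_hat * (1.0 - p_hat) / n)
    print(f"P(L > {u}) ~ {p_hat:.2e} +/- {se:.1e}")
    # The relative error scales like 1/sqrt(n*p): the rarer the event,
    # the more samples plain Monte Carlo needs, which is exactly what
    # novel sampling methods aim to overcome.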

Supervisor: Abdul-Lateef Haji-Ali

Hierarchical Methods for Chaotic Systems

Description: Chaotic systems appear in weather, ocean circulation and climate models, and incur enormous computational cost with currently available methods due to accuracy requirements imposing slow time-stepping and fine space-discretization. In this project, we will develop hierarchical methods to speed up uncertainty quantification (UQ) of such systems which will allow practitioners to conduct more thorough statistical studies that will ultimately result in better decision making.

Supervisor: Abdul-Lateef Haji-Ali

Bilevel optimisation for inverse problems: analysis, fast computations, and Bayes

Description: Inverse problems concern the estimation of parameters of mathematical models given real-world data: we estimate the permeability of a groundwater reservoir using measurements of the hydrostatic pressure in the reservoir, we reconstruct the position and shape of a tumour using attenuated X-rays in medical imaging, and we train the weight and bias matrices in a deep neural network that aims at distinguishing cats and dogs. Inverse problems are usually not uniquely solvable or their solution is brittle with respect to small perturbations in the data -- they are ill-posed.

Two approaches that can often overcome ill-posedness are regularisation on the one hand and the Bayesian approach on the other. The regularisation approach consists in minimising a functional that is the sum of the negative log-likelihood of the observed data given the unknown parameter and an additional term with favourable properties. The Bayesian approach is probabilistic: we model the unknown parameter as a random variable distributed according to the so-called prior distribution. Using the aforementioned likelihood and Bayes' formula, we can then obtain the posterior distribution, that is, the conditional distribution of the parameter given the data observation. The posterior can be used for point estimation and uncertainty quantification.
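In symbols (a standard formulation, with $y$ the data and $u$ the unknown parameter): the regularisation approach solves

    $\min_u \; \Phi(u; y) + \alpha R(u)$, with $\Phi(u; y) = -\log \pi(y \mid u)$,

for a regulariser $R$ and weight $\alpha > 0$, while the Bayesian approach forms the posterior via Bayes' formula,

    $\pi(u \mid y) \propto \pi(y \mid u)\, \pi_0(u)$.

The two are linked: with the prior $\pi_0(u) \propto e^{-\alpha R(u)}$, the regularised minimiser is exactly the posterior mode.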

Regulariser and prior can have a large influence on the solution of the inverse problem, and an appropriate choice is hard. In bilevel optimisation, we aim to `learn' the regulariser based on available data. Such a parameter can be a simple prefactor of a usual regulariser [Reyes et al.; Journal of Mathematical Imaging and Vision 57: 1–25 (2017)] that needs to be determined, or the regulariser can be completely determined by a neural network [Mukherjee et al.; arXiv:2008.02839 (2020)], whose weights and biases then need to be learned.

We commence this project by looking at the method of [Antil et al.; Inverse Problems, 36: 064001 (2020)], which presents an interesting way to estimate the fraction of a fractional Laplacian that is used for regularisation. Here, the primary goal is to find a more scalable version of this algorithm through modern techniques in numerical linear algebra, allowing us to reconstruct large-scale medical images. Future work may include: bilevel optimisation of prior distributions in Bayesian inversion, learning of sparse dictionaries through optimisation on manifolds, other fractional operators (such as total variation), and a continuous-time analysis of stochastic gradient descent in bilevel optimisation [Jin et al.; arXiv:2112.03754 (2021)].

Supervisors: Dr Jonas Latz, Dr Abdul-Lateef Haji-Ali 

Statistical learning for quantifying meteorological event-related risks

Supervisor(s): Dr G Tzougas, Prof G Streftaris

Description: Quantifying meteorological event-related risks has become increasingly important in general insurance, as extreme climate events may trigger excess claims that can have a detrimental impact on the insurer’s portfolio. On the other hand, it is challenging to model the relation between climate events and claim frequencies, since detailed information on climate events is often not fully recorded. Motivated by these issues, in this PhD project we will use compound frequency and severity statistical models, together with copula-based models for the number and the cost of claims, to characterise meteorological event-related risks. The proposed models have the capacity to uncover the joint distribution of the event and claim processes, also in cases where the observed data are incomplete. Bayesian methodology will be used to quantify the associated uncertainty, and we will also consider extensions based on deep learning techniques for capturing non-linearities in the data. Geospatial information will be included to assess potential impact on the meteorological event and claim frequencies. Finally, the project will investigate possible negative intrinsic dependencies between meteorological events and per-event claim frequencies, which can imply that an insurance company may enjoy diversification benefits from climate change that causes more meteorological events.
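A minimal sketch of the compound frequency-severity structure underlying such models (in Python, with entirely hypothetical parameters: a Poisson claim count with a log link on a meteorological covariate, and lognormal severities):

    import numpy as np

    rng = np.random.default_rng(42)

    def aggregate_claims(n_events, n_policies=1000):
        # Frequency: Poisson count with a log link on the number of
        # meteorological events; severity: lognormal claim sizes.
        lam = n_policies * np.exp(-3.0 + 0.4 * n_events)
        n_claims = rng.poisson(lam)
        severities = rng.lognormal(mean=7.0, sigma=1.2, size=n_claims)
        return severities.sum()

    for events in (0, 2, 5):
        mean_cost = np.mean([aggregate_claims(events) for _ in range(500)])
        print(f"{events} events -> mean aggregate claims ~ {mean_cost:,.0f}")

The project's models replace these toy ingredients with copula-coupled event and claim processes, Bayesian uncertainty quantification, and deep learning extensions.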

Bayesian predictive modelling for cancer risk: key drivers in cancer morbidity and mortality disparities

Supervisor(s): Prof G. Streftaris, Dr G. Tzougas

Description: The principal aim of the proposed research is to develop, evaluate and assess models for investigating cancer incidence and mortality rates in various sub-national groups, based on demographic/socio-economic factors (e.g. ethnicity, education, deprivation, country of origin), under statistical and machine learning frameworks that allow for uncertainty quantification. The proposed work will address the timely need to develop robust predictive models for rapidly changing morbidity risks and the relevant impact on health-related insurance. Earlier work has shown that morbidity and health-insurance-related rates are idiosyncratic to a number of factors, including demographic, socio-economic and policy-linked characteristics. The proposed project will build on this work to identify key drivers in cancer morbidity and mortality, also relating to insured populations. We will also assess the robustness of the developed predictive models, aiming to optimise both the interpretability and predictive quality for risks associated with certain medical morbidity causes.

Projects in Structure and Symmetry - Mathematical Physics

Renormalisation interfaces in two-dimensional quantum field theory

Supervisor: A. Konechny

Description: The renormalisation group (RG) is a fundamental concept in Quantum Field Theory (QFT) that describes how the physics changes under a change of energy scale. A typical renormalisation group trajectory starts from one fixed point and drives the theory to a different fixed point. In two dimensions the fixed points are described by conformal field theories that possess an infinite-dimensional symmetry algebra. While a lot is known about the end points of renormalisation group flows, very little is known about the global structure of the space of flows linking the end points. The project is centred around the use of renormalisation domain walls, or renormalisation interfaces. In two dimensions this object is a line of contact between two different conformal field theories, one on each side. The project involves studying such objects for concrete RG flows, both analytically and numerically. The aims are to learn how to construct such objects, what information they encode about the RG flows, and how they could be used to gain control over the space of flows.

Topological interfaces and renormalisation group flows

Supervisor: A. Konechny

Description: Topological interfaces can be thought of as generalisations of conserved charges in quantum field theories. In the context of two-dimensional conformal field theories (CFTs), the set of topological interfaces forms an interesting algebraic structure called a fusion category. This structure has proved to be a useful tool in analysing various aspects of 2D CFTs. Moreover, in certain situations topological interfaces can be used to constrain RG flows triggered by perturbing a 2D CFT by relevant operators. The project aims to study this type of constraint for particular RG flows and to develop new methods of deriving such constraints.

Twistors, quantum Donaldson-Thomas invariants and dispersionful integrable systems

Supervisor: R. Szabo

Description: This project develops a novel geometric framework for understanding the twistor geometry underlying quantum Donaldson-Thomas invariants and dispersionful integrable systems, based on the Moyal-deformed version of Plebanski's second heavenly equation for self-dual gravity.

Instantons and Donaldson-Thomas theory on Calabi-Yau 4-folds

Supervisor: R. Szabo

Description: This project explores the computation of Donaldson-Thomas invariants of toric Calabi-Yau 4-folds by enumerating BPS states in an 8-dimensional cohomological gauge theory. Specific goals are to extend the known calculations beyond flat space and local orbifolds to more general curved backgrounds, and to study the moduli space structures of these theories including their wall-crossing behaviour.

Homotopical descriptions of higher-form symmetries

Supervisor: R. Szabo

Description: This project explores various mathematical structures which underpin higher-form symmetries and their symmetry topological field theories (SymTFTs) using techniques based on groupoids of field configurations, the modern homotopical incarnation of the Batalin-Vilkovisky formalism based on factorisation algebras, and differential cohomology.

Braided homotopy algebras and noncommutative field theories

Supervisor: R. Szabo

Description: Several projects are available under this general theme, which formulates noncommutative field theories with braided symmetries in terms of a braided version of the Batalin-Vilkovisky formalism. Among the goals is to reach a novel homotopy double copy realisation of twisted noncommutative gravity in terms of noncommutative Yang-Mills theory.

Are regular polygons optimal in relativistic quantum mechanics? 

Supervisor: Lyonell Boulton  

Description: Among all polygons with the same perimeter, regular polygons are known to minimise the ground energy of the Dirichlet Laplacian. This is also the case if we consider the area as the fixed geometrical invariant rather than the perimeter. In the language of quantum mechanics, this can be re-interpreted by saying that the non-relativistic Schrödinger operator on boxes with the same perimeter or area attains its minimal energy when the box has a regular base. These results can be traced back to the work of Pólya in the 1950s and, arguably, they have given rise to a whole new area of research in Geometrical Spectral Theory, which is still an active subject of enquiry with strong links to Mathematical Physics.

The aim of this PhD project is to develop research in the following direction. Suppose that we replace the Schrödinger operator with the free Dirac operator and pose the same question in the relativistic setting. Will the box with a regular base minimise the energy (in absolute value) among all others?

Recent progress on this problem includes the paper [1], which considers general regions, and [2], where this question is posed but not solved. It appears that, even when the region is rectangular, the question of whether the square is indeed the optimal shape is not so easy to tackle.

A concrete initial stage of the project will be the analytic investigation of the problem on quadrilateral regions, trying to identify a shape for which the problem can be reduced and treated with exact formulas. Depending on progress, we might consider numerical investigations along the lines of [3], in order to gain insight into further lines of enquiry. The initial phase of the project might lead in different directions, including the addition of magnetic or electric fields.

References:  

[1] Ann. Henri Poincaré 19, 1465–1487 (2018) 

[2] J. Math. Phys. 63, 013502 (2022) 

[3] Appl. Numer. Math. 99, 1–23 (2016)

Projects in Structure and Symmetry - Algebra, Geometry, Topology

Solving equations in groups 

Supervisor: Laura Ciobanu 

Description: Imagine an equation of the form XaYYbZZc = 1 in a group G, where X, Y, Z are variables and a, b, c are some elements of G. Does this equation have solutions, and if it does, what are they? The answer depends very much on the group: whether it is free, hyperbolic, nilpotent or of some other type. In some cases these questions, for arbitrary equations, are unsolvable; in other cases they are well understood but quite difficult. This project would revolve around understanding equations in nilpotent groups, and the base case would be the 3x3 Heisenberg group, where very little is known in terms of describing the solutions to an equation. Alternatively, depending on the background of the applicant, it could involve equations in some groups acting on rooted trees, such as the Grigorchuk group.
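For concreteness, the 3x3 Heisenberg group mentioned above is the group of integer upper unitriangular matrices

    $H = \left\{ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb{Z} \right\}$,

generated by the matrices $x$ (with $a = 1$, $b = c = 0$) and $y$ (with $b = 1$, $a = c = 0$); their commutator $z = xyx^{-1}y^{-1}$ generates the centre, and the group is nilpotent of class 2.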

This project brings together group theory, combinatorics, computational complexity, and possibly some algebraic geometry and formal languages, and it can be treated theoretically or rather computationally. 

Counting geodesics in groups 

Supervisor: Laura Ciobanu 

Description: To each finitely generated group one can attach its Cayley graph: a graph whose vertices are the group elements, with an edge connecting two vertices if they are related via multiplication by a generator. If one counts all the geodesic, or shortest, paths between the identity element/vertex and the vertices at distance n from the identity, then one obtains the geodesic growth function of the group. Much is known about this function, but a lot is also left to explore. For example, does there exist a group where this function is algebraic but not rational? Or does there exist a group where this function grows faster than polynomially but slower than exponentially?
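As a toy illustration of the definition (a Python sketch for Z^2 with its standard generators, a classical case, unlike the open questions above), one can compute the first values of the geodesic growth function by breadth-first search:

    # Geodesic growth of Z^2 with its standard generating set: a
    # breadth-first search that counts the geodesics reaching each vertex.
    gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def geodesic_growth(n_max):
        dist = {(0, 0): 0}
        count = {(0, 0): 1}  # number of geodesics from the identity
        frontier = [(0, 0)]
        totals = [1]         # totals[n] = number of geodesic words of length n
        for n in range(1, n_max + 1):
            layer = {}
            for v in frontier:
                for dx, dy in gens:
                    w = (v[0] + dx, v[1] + dy)
                    if w not in dist:  # w lies on the sphere of radius n
                        layer[w] = layer.get(w, 0) + count[v]
            dist.update({w: n for w in layer})
            count.update(layer)
            frontier = list(layer)
            totals.append(sum(layer.values()))
        return totals

    print(geodesic_growth(5))  # [1, 4, 12, 28, 60, 124]: rational growth here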

This project brings together group theory, combinatorics, formal languages, and computational experiments. 

The homology groups of a class of étale groupoids

Supervisor: Mark Lawson 

Description: Étale groupoids are interesting to a wide circle of people, including group theorists, C*-algebra theorists and those of us working in inverse semigroup theory. The étale groupoids of particular interest are those whose space of identities is a compact Hausdorff 0-dimensional space.

These include, of course, the étale groupoids whose space of identities is the Cantor space. There has been some work on the integer homology groups of such groupoids, with much of it focussed on the zeroth and first homology groups. The ultimate aim is to classify those étale groupoids which are effective and minimal. It seems very likely that the integer homology groups will play a role in any such classification. What more can be said about these homology groups? Can we use non-commutative Stone duality to help us understand them better?

A knowledge of topology is essential for this project, together with a strong background in algebra.

Applications of MV-algebras to a class of Boolean inverse monoids

Supervisor: Mark Lawson 

Description: MV-algebras generalize Boolean algebras and come originally from multiple valued logic (whence the MV). 

By work of Lawson, Scott and Wehrung, it is known that all MV-algebras can be coordinatized by means of suitable Boolean inverse monoids: specifically, those which are factorizable and satisfy what is termed the lattice condition. This suggests that the theory of MV-algebras should be applicable to this class of Boolean inverse monoids. In particular, it suggests that there might be a sheaf representation of such Boolean inverse monoids. To date, very little MV-algebra theory has been applied to the study of this class of Boolean inverse monoids. But such an application could lead to some very interesting geometry. A strong background in algebra is essential for this project.

The geometry of Artin groups 

Supervisor: Alexandre Martin 

Description: Artin groups form a class of groups generalising braid groups, with strong connections to Coxeter groups. Unlike Coxeter groups, however, the structure and geometry of Artin groups are still mysterious in full generality. Certain classes of Artin groups are better understood, and this often comes from the existence of well-behaved actions on non-positively curved spaces (hyperbolic, CAT(0), etc.).

The goal of this project would be to study such actions, and to construct new ones, in order to reveal more of the geometry (in particular, non-positively curved features) and the structure (subgroups, automorphisms, etc.) of Artin groups.

Combination problems in non-positive curvature 

Supervisor: Alexandre Martin 

Description: When studying a group G acting on a simplicial complex, one can think of this action as a way to decompose G into smaller "pieces" (the stabilisers of simplices), glued together via the combinatorics of the action. A natural question to ask is then the following: if all stabilisers satisfy a given property (P), under what conditions (on the geometry of the complex acted upon, the dynamics of the action, etc.) can we conclude that the group G itself satisfies this property (P)?

Such "combination problems" have been extensively studied for groups acting on trees, but fewer results are known in higher dimension. The goal of this project would be to study such problems for groups acting on higher dimensional complexes such as CAT(0) cube complexes and polygonal complexes, for various classes of properties (hyperbolicity, Tits alternative, etc.), and with applications to certain important classes of groups: Artin groups, graphs products, etc. 

Large-scale geometry of groups 

Supervisor: Alessandro Sisto 

Description: Geometric group theory is the study of groups using geometry. More concretely, one associates to a (finitely generated) group a certain metric space called the Cayley graph. Strictly speaking, the Cayley graph depends on a choice of generating set, but Cayley graphs associated to different generating sets share the same "large-scale geometry". That is, there is a notion of maps preserving the large-scale geometry of spaces, called quasi-isometries, and all Cayley graphs of a given group are quasi-isometric to each other. In view of this, in geometric group theory it is very natural to study groups up to quasi-isometry.
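For reference, the standard definition: a map $f : X \to Y$ between metric spaces is a quasi-isometry if there exist constants $\lambda \ge 1$ and $C \ge 0$ such that

    $\frac{1}{\lambda}\, d_X(x, x') - C \;\le\; d_Y(f(x), f(x')) \;\le\; \lambda\, d_X(x, x') + C$ for all $x, x' \in X$,

and every point of $Y$ lies within distance $C$ of the image of $f$.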

The project would focus on studying the large-scale geometry of various groups of interest in algebra, geometry, and topology. More specifically, this involves studying properties that are invariant under quasi-isometries, as well as rigidity phenomena.

Randomness in groups 

Supervisor: Alessandro Sisto 

Description: Given a group, it is natural to ask what a "generic" element of the group looks like. In order to make this question precise, one can introduce random walks, as those provide a model for a random, or generic, element of a group. There are also various constructions, for example in low-dimensional topology, where one parameter is an element of a certain group, so random walks also provide models for "generic" objects of other kinds, for example 3-manifolds. Part of the motivation to study random walks, besides the intrinsic interest, is that sometimes in order to prove the existence of objects of a certain kind, the best way to proceed is to show that a generic object satisfies the required property. 

This project focuses on properties and applications of random walks and other stochastic processes within a broad class of groups, called acylindrically hyperbolic groups, that provides a common framework to simultaneously study various groups of interest in algebra, geometry, and topology. 

Projects in Applied and Computational Mathematics – including Industrial Mathematics and Mathematical Biology & Ecology

Non-local operators: Applications and efficient computation

Supervisor: Lehel Banjai 

Description: Non-local interactions are ubiquitous in nature and lead to models that are difficult to handle accurately and efficiently. An example of this is the area of fractional differential operators, interest in which has exploded in recent years among numerical analysts, probabilists, engineers, and mathematical analysts. Applications are wide-ranging, including pattern formation in biology, therapeutic ultrasound in medicine, and anomalous diffusion in finance and engineering. This is a huge and very active field. The project would address the efficient computation of these difficult-to-compute fractional operators and applications to new areas. One interesting possibility is to look into the fractional steady-state wave equation, which has applications in, e.g., geophysics. Here much is still open, including the qualitative behaviour of solutions, appropriate models, analysis of the solutions of the PDE model, and both the efficient computation and the analysis of the numerical schemes.

Space-time numerical methods for nonlinear acoustics with applications to medical ultrasound  

Supervisor: Lehel Banjai 

Description: In this project we would consider the numerical solution of a class of nonlinear wave equations modelling medical ultrasound. One such model is described by the attenuated Westervelt equation. The standard procedure for solving the equation numerically is to first discretise in space by, e.g., the finite element or finite difference method. Explicit or implicit time-stepping applied to this semi-discretisation gives rise to a heavily structured space-time discretisation. Instead, in this project we will look at fully unstructured space-time meshes that can be adapted in both space and time to the wave travelling through the tissue. Such space-time finite element methods have been investigated since the end of the 1980s. However, only lately has there been a surge of interest in them, due to the ready availability of high-performance parallel computing infrastructure. Much is still open: optimal formulations, a-posteriori error analysis and adaptivity, efficient construction of space-time elements in 3+1D, solution of the resulting linear systems (preconditioning, parallel direct methods), etc. The aim of this project is to look at some of these aspects.

Defect interaction in a crystalline lattice

Supervisor: Julian Braun 

Description: Crystalline materials are solids in which the atoms follow the pattern of a periodic lattice. Defects are the imperfections in this lattice structure. As such, they are crucial to fully understanding the overall behaviour of the material. The aim of this project is an in-depth analysis of the interaction of two or more defects at the atomic level. This should lead to the derivation of interaction laws on a larger scale, while also giving the opportunity to develop new numerical methods for the computation of defect interaction.

Modelling the transport of microplastics in the ocean  

Supervisor: Cathal Cummins 

Description: There are an estimated 5.25 trillion plastic pieces floating in the global oceans, with approximately 1.5 million tonnes of microplastics polluting the ocean each year. There is an observed size-based preferential loss of this plastic from the ocean surface into the water column; however, we still lack a full understanding of the mechanisms behind this process. This hinders our ability to map the distribution of microplastics in the ocean, monitor their ecological impact or plan for partial removal. However, we have recently made progress in developing a mathematical description of one important process, biofouling (the accumulation of algae on the surface of microplastics), and its role in the vertical movement of floating debris. One of the key findings of this work is that particle properties are the biggest factor in determining the particular excursions that microplastics make beneath the free surface.

However, this study neglected inertial effects, such as added mass and history effects, which result from the lagging boundary-layer development of an accelerating particle. It also did not consider the effects of non-spherical geometries. In a recent review, we found that history effects can only be neglected for microplastics of 55 μm diameter and less in regular ocean conditions. Given that microplastics are defined as any plastic debris with diameter between 1 μm and 5 mm, there remains a considerable range of microplastics whose dynamics should include an analysis of the history force. This project aims to investigate the influence of inertial and geometric effects on the migration and ultimate fate of biofouled particles in the ocean.

Optimal algorithms for nonlinear partial differential equations

Supervisor: Sebastien Loisel 

Description: The efficient solution of partial differential equations plays an important role in all fields of application. For example, the stationary heat equation (the Laplacian or Poisson problem) asks for a solution $u$ to the partial differential equation $u_{xx} + u_{yy} = f$. One can discretize this problem, e.g. replacing the derivatives by finite differences, which yields a finite-dimensional linear problem. I am interested in nonlinear problems, such as the $p$-Laplacian. These problems are much harder, and published solvers often fail to converge, or converge very slowly.  

If the PDE is discretized on a grid with $n$ points, it is obviously impossible to solve a PDE in less than $O(n)$ time. In this project, we will investigate algorithms for solving nonlinear PDEs in almost $O(n)$ time.  
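For the linear model problem above, the discretisation step is a few lines in any sparse linear algebra library; here is a minimal Python/SciPy sketch (written with the usual sign convention $-(u_{xx} + u_{yy}) = f$; the nonlinear $p$-Laplacian the project targets would replace the single linear solve below with an iteration of such solves):

    import numpy as np
    from scipy.sparse import diags, identity, kron
    from scipy.sparse.linalg import spsolve

    # Finite-difference discretisation of -(u_xx + u_yy) = f on the unit
    # square with homogeneous Dirichlet data; m x m interior grid points.
    m = 50
    h = 1.0 / (m + 1)
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    A = kron(identity(m), T) + kron(T, identity(m))  # n = m^2 unknowns

    f = np.ones(m * m)               # constant source term
    u = spsolve(A.tocsr(), f)        # one sparse direct solve
    print(f"max u = {u.max():.4f}")  # about 0.0737 for this model problem

Even this linear solve costs more than $O(n)$ with a direct method, which is why near-optimal-complexity algorithms for the nonlinear case are a genuine challenge.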

Modelling Epithelial Wound Healing

Supervisor: Jonathan Sherratt  

Description: The term epithelium refers to the surface layer of an organ, and it is the first line of defence against injury. The healing of epithelial wounds has been studied in great detail in the skin and the cornea of the eye, including via a significant body of mathematical modelling. I am keen to develop these models to apply specifically to epithelia in other tissues, which can show significant points of difference, such as a close interplay with the immune system. Work in this area is well suited to a student keen to apply partial differential equation models to a specific biological system with potential medical implications.

Modelling Vegetation Patterns in Semi-Arid Regions  

Supervisor: Jonathan Sherratt  

Description: In regions where water is the limiting resource, plants often cluster together, forming large-scale spatial patterns. Mathematical models of this process have been studied for 20 years, and have contributed hugely to our understanding of the patterns. I am keen to develop these models, making them more realistic by including factors such as long-range dispersal. This will involve the development of new mathematical methodologies, for example to construct numerical bifurcation diagrams for integro-partial differential equations. The aims include the identification of new signatures for imminent ecosystem collapse, and the development of optimal strategies for replanting degraded landscapes.

Ecological and epidemiological models of wildlife and livestock systems  

Supervisor: Andrew White 

Description: Mathematical models are key tools to understand the population and infectious disease dynamics of natural systems. Results from model studies have been used to guide policy decisions and shape conservation strategies to protect endangered species. Models typically focus on pairwise interactions, such as predator-prey or host-disease dynamics, but there is now evidence that the ecological community composition emerges through complex interactions where, for example, the interplay between competition, predation, disease transmission, seasonality and spatial structure can all play a key role. Examples include how the shared pathogen, squirrelpox, and the shared predator, the pine marten, can alter the outcome of species competition between red and grey squirrels and how the re-introduction of a native predator species, wolves, can reduce the prevalence of tuberculosis in wildlife prey species such as wild boar and deer and thereby reduce the chance of disease spillover to livestock populations.

This project would aim to develop new models and theory that capture the complexity of the real world by examining complex species interactions which integrate the effects of competition, predation and disease across trophic levels. The models will be developed in collaboration with biologists with expertise in the red and grey squirrel, squirrelpox and pine marten case study system in the UK and Ireland, with biologists who examine pathogen diversity at the interface between wildlife and livestock populations in Spain and in collaboration with the theoretical ecology group at UC Berkeley, USA.

Statistical Control of the Ecological Risks of Fisheries

Supervisor: Abdul-Lateef Haji-Ali

Description: Fishing competes with predators, such as birds and seals, for resources and might impact their populations. This PhD project will use state-of-the-art statistical methods to analyse the dynamics involved in fisheries and develop new models and tools to manage the ecological and financial impacts of fishing. The analysis of ecosystem dynamics will make use of extensive data including remote sensed environmental variables and time series of bird, mammal and fish population and performance estimates. The results of this analysis will then be used in a simulation framework to develop and test feedback methods for managing the fisheries to control the risks to marine ecosystems while maintaining economic benefits of fishing.

Numerical analysis for multiscale bulk-surface PDEs

Supervisor: Mariya Ptashnyk

Description: Coupled systems of nonlinear partial differential equations defined in bulk domains and on surfaces arise naturally in the modelling of many biological and physical systems. Popular examples are models for intercellular signalling processes, crucial to all biological processes in living tissues. In such models, the dynamics of signalling molecules in intercellular and/or intracellular spaces (the bulk domain) are coupled to the dynamics of receptors on cell membranes (the surface). In this project we will consider the design and analysis (a priori and a posteriori error analysis) of numerical schemes for multiscale bulk-surface problems, considering processes on two different spatial scales (e.g. at the level of a single cell and at the tissue level). (Joint with C. Venkataraman, University of Sussex.)

Numerical analysis for nonlocal cross-diffusion systems

Supervisor: Mariya Ptashnyk (joint with Lehel Banjai, Heriot-Watt University)

Description: Cross-diffusion systems arise in the modelling of many different biological and physical processes, e.g. the movement of cells, bacteria or animals, transport through ion channels in cells, tumour growth, gas dynamics, and carrier transport in semiconductors, with the chemotaxis system being one of the most important examples. The motivation for considering nonlocal cross-diffusion systems, where the Laplacian (modelling a random walk) is replaced by the fractional Laplacian, comes from the experimental observation that, in certain situations in cell motility and population dynamics, organisms move according to Lévy processes. In this project we will consider the design, analysis and implementation of efficient numerical schemes for the simulation of nonlocal cross-diffusion systems. There are two main challenges in the numerical simulation of fractional cross-diffusion systems: the cross-diffusion terms and the nonlocality of the fractional Laplacian.
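To illustrate the second challenge, here is a minimal sketch (periodic 1D grid, illustrative parameters) of one standard way to handle the nonlocality: on a periodic domain the fractional Laplacian $(-\Delta)^{\alpha/2}$ acts diagonally in Fourier space, with symbol $|k|^\alpha$.

    import numpy as np

    n, L, alpha = 256, 2 * np.pi, 1.5
    x = np.linspace(0, L, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wavenumbers

    def frac_laplacian(u):
        # (-Delta)^{alpha/2} u via the Fourier symbol |k|^alpha
        return np.fft.ifft(np.abs(k)**alpha * np.fft.fft(u)).real

    u = np.sin(3 * x)
    # sin(3x) is an eigenfunction with eigenvalue 3^alpha; the error is tiny
    print(np.max(np.abs(frac_laplacian(u) - 3**alpha * np.sin(3 * x))))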

Multiscale methods and Multiscale Interacting particle systems

Supervisor: Michela Ottobre

Description: Many systems of interest in the applied sciences share the common feature of possessing multiple scales, either in time or in space, or both. Some approaches to modelling focus on one scale and incorporate the effect of other scales (e.g. smaller scales) through constitutive relations, which are often obtained empirically. Multiscale modelling approaches are built on the ambition of treating both scales at the same time, with the aim of deriving (rather than empirically obtaining) efficient coarse-grained models which incorporate the effects of the smaller/faster scales. Multiscale methods have been tremendously successful in applications, as they provide both underpinning for numerics/simulation algorithms and modelling paradigms in an impressive range of fields, such as engineering, material science, mathematical biology and climate modelling (notably playing a central role in Hasselmann’s programme, where climate/weather are seen as slow/fast dynamics, respectively), to mention just a few.

More detail. In this project, which is in the field of applied stochastic analysis, we will consider systems that are multiscale in time, with particular reference to multiscale interacting particle systems. We will try to understand how the multiscale approximation interacts with the mean-field approximation (produced by letting the number of particles in the system tend to infinity, so as to obtain a PDE for the evolution of the density of the particles). In the setting we will consider, the systems at hand have two scales, the so-called fast and slow scales, each of them modelled by appropriate stochastic differential equations. Classical multiscale paradigms consider the setting in which the fast scale has a unique invariant measure (equilibrium). We will consider the often more realistic scenario in which the fast process has multiple invariant measures. The motivation for this project comes especially from models in mathematical biology, but the applicability of the framework we will investigate is broader. The candidate working on this project should be open to investigating both theoretical and modelling aspects – though developing their own preference in time.

Interacting Particle systems and Stochastic Partial Differential Equations

Supervisor: Michela Ottobre

Description: This project belongs to the broad field of applied stochastic analysis.
Many systems of interest consist of a large number of particles or agents (e.g. individuals, animals, cells, robots) that interact with each other. When the number of agents/particles in the system is very large, the dynamics of the full Particle System (PS) can be rather complex and expensive to simulate; moreover, one is quite often more interested in the collective behaviour of the system than in its detailed description (e.g. bird flocking). In this context, the established methodology in statistical mechanics and kinetic theory is to look for simplified models that retain relevant characteristics of the original PS by letting the number N of particles tend to infinity (the so-called mean-field limit); the resulting limiting equation for the density of particles is a low-dimensional (in contrast with the initial high-dimensional PS) non-linear partial differential equation (PDE), where the non-linearity has a specific structure, commonly referred to as a McKean-Vlasov nonlinearity. Beyond an intrinsic theoretical interest, such models were proposed with the intent to efficiently direct human traffic, to optimize evacuation times, to study rating systems, opinion formation, etc.; and in all these fields they have been incredibly successful.
More detail. In this project we will consider PSs modelled by Stochastic Differential Equations (SDEs) whose limiting behaviour is described by either a deterministic PDE or a stochastic PDE (SPDE). It is indeed important to notice that, depending on the nature of the stochasticity in the PS, the limiting equation can be either a deterministic PDE – and this would be the most classical framework – or a stochastic PDE. Either way, the limiting equation is of McKean-Vlasov type. The overall aim of the project is the comparison of the ergodic and dynamic properties of the particle system and of the limiting PDE/SPDE. These results will help inform modelling decisions for practitioners. From a theoretical standpoint, one of the purposes of this project will be to push forward the ergodic theory for SPDEs. Keywords for this project are: Stochastic (Partial) Differential Equations, McKean-Vlasov evolutions, ergodic theory, mean-field limits. Prerequisites: a good background in either stochastic analysis/probability or analysis.
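As a minimal illustration of the objects involved (a toy mean-field model with attraction towards the empirical mean; all parameters are illustrative): for large N the empirical measure of the particles approximates the solution of the corresponding McKean-Vlasov equation.

    import numpy as np

    rng = np.random.default_rng(2)
    N, T, dt, sigma = 2000, 5.0, 0.01, 0.5
    X = 3.0 * rng.standard_normal(N)                 # initial particle positions
    for _ in range(int(T / dt)):                     # Euler-Maruyama for the system
        X += -(X - X.mean()) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    print(X.mean(), X.var())                         # stationary variance ~ sigma^2 / 2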

Factors influencing the time to disease fade-out. 

Supervisor: Damian Clancy 

Description: The spread of infectious disease through a population is an inherently random process, and can be studied using stochastic models. For diseases which become endemic in a population, one object of interest is the time until fade-out of infection (a random variable). The expected time to fade-out may be computed straightforwardly through Monte-Carlo simulation, or more exactly from general Markov process theory. For more complicated models, implementing these approaches becomes less straightforward, and approximation methods may also be needed. In this project, you will investigate a variety of approaches, with the aim of understanding the effects of particular disease features upon time to fade-out. There are many different features of different diseases that you could study - for instance, you might examine the impact of environmental transmission upon disease persistence, or the effects of changes in the birth and death rates of the susceptible population.   
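For illustration, here is a minimal Monte-Carlo sketch of the first approach, for a hypothetical stochastic SIS model simulated with the Gillespie algorithm (all parameter values are made up, chosen so that fade-out happens quickly):

    import numpy as np

    rng = np.random.default_rng(3)
    N, beta, gamma = 100, 1.2, 1.0                   # population, infection, recovery rates

    def time_to_fadeout(I0=5):
        I, t = I0, 0.0
        while I > 0:
            rate_inf = beta * I * (N - I) / N        # rate of new infections
            rate_rec = gamma * I                     # rate of recoveries
            total = rate_inf + rate_rec
            t += rng.exponential(1.0 / total)        # time to the next event
            I += 1 if rng.random() < rate_inf / total else -1
        return t

    times = [time_to_fadeout() for _ in range(200)]
    print(np.mean(times))                            # Monte-Carlo estimate of E[fade-out time]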

References:  

"Approximating time to extinction for endemic infection models" by Damian Clancy and Elliott Tjia (2018), Methodology and Computing in Applied Probability volume 20, pages 1043–1067 10.1007/s11009-018-9621-8 

"The Influence of Latent and Chronic Infection on Pathogen Persistence" by A. O'Neill, A. White, D. Clancy, F. Ruiz-Fons & C. Gortázar (2021), Mathematics volume 9, article number 1007 https://doi.org/10.3390/math9091007 

Trustworthy Deep Learning Strategies for Inverse Problems in Imaging

Supervisor: Audrey Repetti (HWU)  

Potential Co-Supervisors: Julie Delon (Université Paris-Cité), Nelly Pustelnik (ENS Lyon), Ulugbek Kamilov (Washington University)

Description: Inverse problems are central to many imaging applications, including medical imaging, remote sensing, and astronomy. For a few decades, iterative optimisation methods have been state-of-the-art for solving such problems. However, they often face limitations in terms of computational and reconstruction efficiency. Recent advances in deep learning offer powerful tools for solving inverse problems, but concerns about the trustworthiness, interpretability, and reliability of these methods hinder their broader adoption in critical applications.

This project aims to develop trustworthy deep learning strategies for inverse problems by leveraging powerful mathematical tools such as optimisation, Bayesian and optimal transport theories. Specifically, we will focus on novel hybrid strategies combining the power of deep learning and neural networks with the theoretical guarantees of mathematics. In this context, three main research directions are of great interest: building powerful models for imaging inverse problems; investigating robustness and convergence guarantees of data-driven methods; and exploring frugal learning strategies to tackle computational complexity challenges.
Ultimately, the goal is to bridge the gap between data-driven deep learning approaches and traditional model-based methods, offering a framework that is both practically effective and theoretically grounded for solving complex inverse problems in imaging.
Such a project encompasses research directions at the core of multiple communities, including inverse problems, computational imaging, optimisation/OR, computational mathematics and machine learning.

Related recent works can be found here: https://sites.google.com/view/audreyrepetti/research/publications 

Pre-requisites: A background in topics related to optimisation, OR, optimal transport, or the foundations of machine learning would be appreciated, as well as knowledge of Python.

Innovative approaches to uncertainty quantification for multiscale kinetic equations

Supervisor(s): Lorenzo Pareschi (HWU)   Co-supervisor: Emmanuil Georgoulis (HWU)

The main objective of the Ph.D. project is the development of advanced numerical methods for solving systems governed by multiscale partial differential equations (PDEs) that depend on various uncertain parameters, such as initial and boundary conditions or external sources. These uncertainties are particularly prominent in models derived from empirical data rather than first principles, such as in environmental modeling, epidemiology, finance, and social sciences. In these cases, the challenge lies in estimating how uncertainty in the parameters influences the solution, a problem that becomes more complex with the increasing dimensionality—often referred to as the “curse of dimensionality.”

To tackle this, the project will explore both deterministic numerical methods and stochastic particle-based approaches, such as Monte Carlo or particle-in-cell methods, which rely on random sampling to efficiently handle high-dimensional problems. Additionally, machine learning techniques will be employed to build surrogate models that can approximate solutions quickly by leveraging experimental data. These models offer a promising alternative for reducing computational costs, particularly in scenarios where real-time or repeated simulations are required.
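As a minimal illustration of the sampling viewpoint (a toy scalar ODE with one uncertain parameter standing in for a multiscale PDE): plain Monte Carlo propagates the input uncertainty through the model at a cost that does not grow with the parameter dimension.

    import numpy as np

    rng = np.random.default_rng(4)
    M, T = 10000, 1.0
    z = rng.uniform(0.5, 1.5, size=M)                # uncertain decay rate
    uT = np.exp(-z * T)                              # exact solution of u' = -z u at t = T
    print(uT.mean(), uT.std())                       # output statistics
    print(np.exp(-0.5) - np.exp(-1.5))               # exact mean, for comparison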

The project spans classical fields such as fluid dynamics and kinetic theory, while also focusing on modern applications with pronounced uncertainty. Prerequisites include knowledge of numerical analysis, with a focus on numerical methods for ordinary differential equations, familiarity with partial differential equations (PDEs), and a solid foundation in probability theory.

Related references 

– Giacomo Dimarco, Lorenzo Pareschi, Multi-scale variance reduction methods based on multiple control variates for kinetic equations with uncertainties, Multiscale Model. Simul. 18 (2020), no. 1, 351-382.

– Giacomo Dimarco, Lorenzo Pareschi, Numerical methods for kinetic equations. Acta Numerica 23, 369-520, 2014.

– Andrea Medaglia, Lorenzo Pareschi, Mattia Zanella, Stochastic Galerkin particle methods for kinetic equations of plasmas with uncertainties, J. Comp. Phys. Volume 479, 112011, 2023

– Lorenzo Pareschi, An introduction to uncertainty quantification for kinetic equations and related problems, in Trails in kinetic theory: foundational aspects and numerical methods, SEMA-SIMAI Springer Series 25:141-181, 2021.

Advanced stochastic particle optimization methods and applications to machine learning

Supervisor:  Lorenzo Pareschi (HWU)   Co-supervisor: Michela Ottobre (HWU)

Description: The relationship between optimization processes and systems of interacting stochastic particles has its roots in the field of “Swarm Intelligence”, where the coordinated behavior of agents interacting locally with their environment leads to the emergence of global patterns.

Stochastic particle dynamics are typically guided by heuristics, and the resulting methods often excel at solving complex optimization problems where conventional deterministic methods fall short.

Gradient-based optimizers are effective at finding local minima for high-dimensional, convex problems; however, most gradient-based optimizers struggle with noisy, discontinuous functions and are not designed to handle discrete and mixed discrete-continuous variables.

Such high-dimensional problems arise in many areas of interest in machine learning applied to fields like engineering, finance, healthcare, and more.

The main goals of this Ph.D. project lie at different levels, ranging from the development of a robust mathematical framework for popular metaheuristics like simulated annealing, particle swarm optimization, genetic algorithms, and ant colony optimization, to their convergence analysis and applications to machine learning problems.

Furthermore, the relationships between well-known optimizers like SGD, Adam, RMSprop and existing metaheuristics will be studied by means of their continuous representation.
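As a small illustration of the particle viewpoint, here is a sketch of (anisotropic) consensus-based optimisation, a gradient-free method of the kind analysed in the references below; all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    f = lambda X: np.sum((X - 1.0)**2, axis=1)       # toy objective, minimiser at (1, ..., 1)
    N, d, dt, lam, sig, alpha = 200, 10, 0.01, 1.0, 0.7, 50.0
    X = rng.uniform(-3, 3, size=(N, d))
    for _ in range(2000):
        w = np.exp(-alpha * (f(X) - f(X).min()))     # Gibbs weights
        x_bar = (w[:, None] * X).sum(0) / w.sum()    # consensus point
        diff = X - x_bar                             # drift to consensus, scaled exploration
        X += -lam * diff * dt + sig * np.sqrt(dt) * diff * rng.standard_normal((N, d))
    print(x_bar)                                     # approximate global minimiser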

Prerequisites for the Ph.D. project include knowledge of at least one course in numerical analysis, covering multidimensional optimization methods, and some familiarity with PDEs, such as a course on the mathematical analysis of PDEs or one on the fundamental equations of mathematical physics. Knowledge of probability theory is also recommended.

Related references:

– Alessandro Benfenati, Giacomo Borghi, Lorenzo Pareschi, Binary interaction methods for high dimensional global optimization and machine learning, Applied Math. Optim. 86(9):1-41, 2022.

– Giacomo Borghi, Michael Herty, Lorenzo Pareschi, Constrained consensus-based optimization, SIAM J. Optimization 33(1):10.1137, 2023.

– Giacomo Borghi, Lorenzo Pareschi, Kinetic description and convergence analysis of genetic algorithms for global optimization, Comm. Math. Sci. to appear. Preprint arXiv:2310.08562, 2023.

– Lorenzo Pareschi, Optimization by linear kinetic equations and mean-field Langevin dynamics, Math. Mod. Meth. App. Sci. to appear. Preprint arXiv:2401.05553, 2024

Non-commutative integrable systems

Supervisor: Simon Malham

Description:  This project concerns the integrability of non-commutative nonlinear partial differential equations. In particular, the non-commutative Kadomtsev-Petviashvili (KP) hierarchies, and their modified forms, are very much of interest. Establishing their integrability by direct linearisation would be one goal. An underlying operator algebra, the pre-Poppe algebra, provides a natural context for establishing direct linearisation for these hierarchies. These hierarchies have an incredibly rich structure and many applications, for example, in nonlinear optics, ferromagnetism and Bose-Einstein condensates. They have intimate connections to: string theory and D-branes; Jacobians of algebraic curves and theta functions; Fredholm Grassmannians; the KPZ equation for random growth off a one-dimensional substrate; and so forth. In addition to these aspects, there are many further directions that could also be explored, for example: the log-potential form; super-symmetric extensions; establishing efficient numerical methods via the direct linearisation approach; etc.

[1] Doikou, A., Malham, S.J.A. and Stylianidis, I. 2021, Grassmannian flows and applications to non-commutative non-local and local integrable systems, Physica D 415, 132744.

[2] Malham, S.J.A. 2022, Integrability of local and nonlocal non-commutative fourth order quintic NLS equations, IMA J. Appl. Math. 87(2), 231-259.

[3] Malham, S.J.A. 2022, The non-commutative Korteweg-de Vries hierarchy and combinatorial Poppe algebra, Physica D 434.

[4] Blower, G., Malham, S.J.A. 2023, The algebraic structure of the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy, Physica D 456, 133913.

[5] Blower, G., Malham, S.J.A. 2024, Direct linearisation of the non-commutative Kadomtsev-Petviashvili equations, submitted.

Coagulation sol-gel phenomena

Supervisor: Simon Malham

Description:  This project concerns Smoluchowski coagulation and sol-gel models. We consider scenarios where particles of different sizes coalesce to form larger particles, including possibly a "gel" state. There are many applications including: aerosols, clouds/smog, clustering of stars/galaxies, schooling/flocking, genealogy, nanostructures on substrates such as ripening or island coarsening, blood clotting and polymer growth, for example, in biopharmaceuticals. The goal is to find analytical solutions as well as construct efficient numerical simulation methods. Determining the dual sol-gel state is an important aspect of these models. Planar tree structures play an important role as well, both at the nonlinear partial differential equation model level, and at the particle model level where, naturally, coalescent stochastic processes represent their overall evolution. Including spatial diffusivity in the model at both these levels, for example to model colloids, adds another complexity. There are many directions to explore, for example: the particle interactions can be much more complex; there is a natural Hopf algebra of planar trees that is likely useful in optimising numerical simulation; investigating the dual reverse time, branching Brownian motion, perspective; and so forth.
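For orientation, here is a minimal numerical sketch of the discrete Smoluchowski equations with constant kernel (truncated at a maximal cluster size, which loses a little mass to larger clusters); this special case has a classical exact solution, used below as a check.

    import numpy as np

    n_max, dt, T = 100, 0.01, 5.0
    c = np.zeros(n_max + 1)                          # c[k]: density of size-k clusters
    c[1] = 1.0                                       # monodisperse initial condition
    for _ in range(int(T / dt)):                     # forward Euler in time
        a = c[1:]
        conv = np.convolve(a, a)                     # conv[k-2] = sum_{i+j=k} c_i c_j
        gain = np.zeros_like(c)
        gain[2:] = 0.5 * conv[:n_max - 1]
        c += dt * (gain - c * a.sum())               # coagulation gain minus loss
    # exact solution for kernel K = 1: c_k(t) = (t/2)^{k-1} / (1 + t/2)^{k+1}
    print(c[1], 1.0 / (1 + T / 2)**2)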

[1] Doikou, A., Malham, S.J.A., Stylianidis, I. and Wiese A., 2023, Applications of Grassmannian flows to coagulation equations, Physica D 451, 133771.

[2] Malham, S.J.A. 2024, Coagulation, non-associative algebras and binary trees, Physica D 460, 134054.

Projects in Analysis and Probability

Slip and twinning in single crystals and polycrystals

 Supervisor: John Ball

 Description: The aim of the project is to analyse the interaction between slip (related to plasticity) and twinning (deformations in which the crystal lattices on either side of an interface are reflected with respect to some direction) in single crystals and polycrystals. This has proved important in recent work of materials scientists (see [2,3]).  An initial step could be to extend some of the results in [1], based on the Ericksen energy-well picture, to situations allowing solid phase transformations. More generally the study of resulting microstructures leads to deep issues in the calculus of variations related to quasiconvexity, but progress should be possible in some interesting simplified situations.

The project offers training in modern techniques of nonlinear partial differential equations and the calculus of variations applied to materials science.

References:

[1] J. M. Ball, Slip and twinning in Bravais lattices, J. Elasticity 155, 763–785, 2024.

[2] H. Seiner, P. Sedlák, M. Frost and P. Sittner, Kwinking as the plastic forming mechanism of B19′ NiTi martensite, International Journal of Plasticity, 168, 103697, 2023.

[3] T. Inamura,  Geometry of kink microstructure analysed by rank-1 connection, Acta Materialia, 173, 270-280, 2019.

Nonlinear elasticity and computer vision

 Supervisor: John Ball

 Description:  The project concerns the comparison of images by minimizing a functional depending on a map taking one image to the other and on features of the images. Part of the functional is the same as that for nonlinear elasticity. In previous work (see [1,2]) some basic properties of this model, such as existence of minimizers, have been established, and conditions found under which for linearly related images the minimization delivers the corresponding linear map as the unique minimizer. The project will extend this work in several directions, in particular testing the minimization algorithm numerically, and considering the effect of adding a term depending on second derivatives of the deformation to the functional.

The project offers a training in nonlinear analysis, especially the calculus of variations, in related numerical methods, and in nonlinear elasticity itself.

References:

[1] J. M. Ball and C. L. Horner, Image comparison and scaling via nonlinear elasticity. In L. Calatroni, M. Donatelli, S. Morigi, M. Prato, and M. Santacesaria, editors, Scale Space and Variational Methods in Computer Vision, pages 565–574, Cham, 2023. Springer International Publishing.

[2] J. M. Ball and C. L. Horner, A nonlinear elasticity model in computer vision, submitted, arXiv:2408.17237.                                                 

Random graphs and networks: limits, approximations, and applications 

Supervisors: Fraser Daly and Seva Shneer 

Description: We will consider some models for random graphs where some nodes may form more connections than others. Such models include non-homogeneous graphs and configuration models. We aim to study various measures of connectedness of such graphs, for instance the probabilities of randomly chosen nodes forming cliques or other subgraphs.

We aim to find asymptotics as well as approximations for these measures of connectedness as the size of the graph goes to infinity. Techniques which could be applied here include Stein's method for probability approximations. Extensions of this work include analogous results for dynamic random graphs evolving in time and for stochastic processes on random graphs.
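A minimal simulation sketch (a Chung-Lu-type inhomogeneous graph, used here as a stand-in for the models above), comparing an empirical triangle count with its first-moment prediction:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(6)
    n = 60
    w = 1.0 + rng.pareto(3.0, n)                     # heterogeneous node weights
    P = np.minimum(np.outer(w, w) / w.sum(), 1.0)    # edge probabilities
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1); A = A + A.T                   # undirected, no self-loops

    triangles = np.trace(A @ A @ A) / 6              # empirical triangle count
    expected = sum(P[i, j] * P[j, k] * P[i, k]
                   for i, j, k in combinations(range(n), 3))
    print(triangles, expected)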

Approximations for Random Sums with Dependence

Supervisors: Fraser Daly  and Seva Shneer

Description: Sums of a random number of random variables have applications in many areas, including insurance, where they can be used to represent the total claim amount received within a given year: a random number of claims is received, each of which is for a random amount. Classically, these individual claim amounts are assumed to be independent and identically distributed, and independent of the number of claims received. This allows us to derive approximations for the distribution of the total claim amount, for example a Gaussian approximation using the central limit theorem. However, these assumptions of independence are unrealistic, and we would like to relax them.

The aim of this project is to derive and investigate explicit distributional approximations for sums of a random number of random variables with dependence, using Stein's method for probabilistic approximation and other relevant tools.    
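A minimal simulation sketch of the object of study (illustrative distributions; the 'common shock' below is one simple way of breaking the classical independence assumptions):

    import numpy as np

    rng = np.random.default_rng(7)
    M, lam = 20000, 50                               # number of years, mean claim count
    N = rng.poisson(lam, M)

    def totals(dependent):
        S = np.empty(M)
        for m in range(M):
            claims = rng.exponential(1.0, N[m])      # individual claim amounts
            if dependent:
                claims *= rng.lognormal(0.0, 0.5)    # common shock scales all claims
            S[m] = claims.sum()
        return S

    for dep in (False, True):
        S = totals(dep)
        z = (S - S.mean()) / S.std()
        print(dep, np.mean(np.abs(z) > 2))           # ~0.046 if approximately Gaussian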

References:

[1] L. H. Y. Chen, L. Goldstein and Q.-M. Shao (2011). Normal Approximation by Stein's Method. Springer, Berlin.  

[2] F. Daly (2021). Gamma, Gaussian and Poisson approximations for random sums using size-biased and generalized zero-biased couplings. Scandinavian Actuarial Journal, to appear.

Last passage percolation in directed random graphs.

Supervisor: Sergey Foss

Description:  The aim of the project is to study various properties of the maximal path length (or weight) in growing random directed graphs where the vertices are partially ordered and edges may exist only from smaller to bigger vertices.

Prospective results include (but are not limited to) analysis of performance characteristics, various limit theorems and simulation algorithms for the growth rate and variance, with a number of applications in biology and computer sciences.
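For orientation, here is a minimal sketch of the central quantity in the Barak-Erdős model: vertices 1, ..., n, an edge from i to j for each i < j independently with probability p, and the longest path length computed by dynamic programming; its linear growth rate is one of the objects studied in the overview below.

    import numpy as np

    rng = np.random.default_rng(8)
    n, p = 2000, 0.5
    L = np.zeros(n, dtype=int)                       # longest path ending at each vertex
    for j in range(1, n):
        edges = rng.random(j) < p                    # which vertices i < j link to j
        if edges.any():
            L[j] = L[:j][edges].max() + 1
    print(L.max() / n)                               # estimate of the growth rate C(p)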

An introduction to the subject may be found in the recent overview paper:

S. Foss, T. Konstantopoulos, B. Mallein and S. Ramassamy, ``Last passage percolation and limit theorems in Barak-Erdős directed random graphs and related models'', Probability Surveys, vol. 21, pp. 67–170, 2024. https://arxiv.org/abs/2312.02884

Heavy-tailed distributions in the SGD algorithms

Supervisor: Sergey Foss

Description:  The project concerns the development and analysis of performance and limiting properties of stochastic difference equations in discrete time that contain random components with heavy-tailed distributions.

Problems of this kind appear naturally in the machine learning literature. It has repeatedly been observed that loss minimization by stochastic gradient descent (SGD) leads to heavy-tailed distributions of neural network parameters.
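A minimal sketch of the prototypical stochastic difference equation in this area, the Kesten recursion X_{n+1} = A_n X_n + B_n: when E[log A] < 0 but A exceeds 1 with positive probability, the stationary law has a power-law tail (the distributions below are illustrative).

    import numpy as np

    rng = np.random.default_rng(9)
    M, n = 100000, 500                               # parallel chains, iterations
    X = np.zeros(M)
    for _ in range(n):
        A = np.exp(rng.normal(-0.1, 0.5, M))         # E[log A] = -0.1 < 0, P(A > 1) > 0
        B = rng.normal(0.0, 1.0, M)
        X = A * X + B
    for t in (5, 10, 20, 40):
        print(t, np.mean(np.abs(X) > t))             # tail decays polynomially, not exponentially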

Basic and advanced properties of heavy-tailed distributions, as well as many exercises, may be found in the book:

S. Foss, D. Korshunov and S. Zachary, An Introduction to Heavy-Tailed and Subexponential Distributions, 2nd edition, Springer, 2013.

Epidemics with migration 

Supervisors:  Sergey Foss and Seva Shneer 

Description: We study models where agents may arrive into a system, may change their state (for instance, infected, susceptible, exposed, recovered) and may change their location in the system. We plan to study the stationary regime of such a system. Questions of interest include conditions for extinction/survival of the epidemic and analysis of the effects of mobility on the spread of the epidemic. 

References:

[1] F. Baccelli, S. Foss, S. Shneer (2024). Migration-contagion processes. Advances in Applied Probability, 56, 1, 71-105.

Convergence of stochastic processes with applications to computational statistics and machine learning

Supervisor: Mateusz Majka

Description: The project will be concerned with the convergence to equilibrium of several different types of stochastic processes, including solutions of stochastic differential equations (driven by either Brownian motion or jump processes) and interacting particle systems, as well as their discrete-time counterparts (see papers [2] and [4]). We will also explore the connections between stochastic processes and optimization methods on the space of probability measures [3]. Results of such type, besides their theoretical significance, have found numerous applications in computational statistics and machine learning. For instance, by employing the probabilistic coupling technique, one can obtain precise convergence rates of numerous Monte Carlo algorithms that are constructed by utilizing discretisations of stochastic differential equations, and are used in computational statistics for sampling from high dimensional probability distributions [1]. On the other hand, mathematical tools such as functional inequalities provide convergence rates for certain optimization algorithms on the space of probability measures that are directly connected to machine learning applications such as training neural networks [3]. Depending on the candidate's interests, the project can focus either on the numerical/computational aspect or on developing the underlying mathematical theory. For more details about my research interests and my research team, please see my website https://sites.google.com/site/mateuszbmajka
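As a minimal illustration of the coupling technique mentioned above (a toy one-dimensional example with illustrative parameters): two Euler chains for an overdamped Langevin equation with a strongly convex potential, driven by the same noise, contract towards each other geometrically, which is the mechanism behind Wasserstein convergence bounds such as those in [4].

    import numpy as np

    rng = np.random.default_rng(10)
    h, steps = 0.05, 200
    grad = lambda x: x                               # potential U(x) = x^2 / 2
    X, Y = 5.0, -5.0                                 # two different initial points
    for k in range(steps):
        xi = rng.standard_normal()                   # shared Gaussian increment
        X += -grad(X) * h + np.sqrt(2 * h) * xi
        Y += -grad(Y) * h + np.sqrt(2 * h) * xi
        if k % 50 == 0:
            print(k, abs(X - Y))                     # decays like (1 - h)^k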

References:

1. M. B. Majka, A. Mijatović and Ł. Szpruch, Non-asymptotic bounds for sampling algorithms without log-concavity, Ann. Appl. Probab. 30 (2020), no. 4, 1534-1581.

2. M. Liang, M. B. Majka and J. Wang, Exponential ergodicity for SDEs and McKean-Vlasov processes with Lévy noise, Ann. Inst. Henri Poincaré Probab. Stat. 57 (2021), no. 3, 1665-1701.

3. R.-A. Lascu, M. B. Majka, D. Šiška and Ł. Szpruch, Linear convergence of proximal descent schemes on the Wasserstein space, https://arxiv.org/pdf/2411.15067.

4. L. Liu, M. B. Majka and P. Monmarché, L^2-Wasserstein contraction for Euler schemes of elliptic diffusions and interacting particle systems, Stochastic Process. Appl. 179 (2025), 104504.

Markov Chain Monte Carlo with applications to Computational Imaging and Machine Learning

Supervisors: Konstantinos Zygalakis (UoE), Paul Dobson (HWU)

Description: Modern data science relies strongly on probability theory to solve challenging problems. In this context, probabilistic models represent the raw data observation process and the prior knowledge available, and solutions are then obtained by performing (often Bayesian) statistical inference analyses. From a computational viewpoint, these analyses are conducted using Markov chain Monte Carlo (MCMC) algorithms, stemming from the theory of Markov processes. Constructing efficient MCMC algorithms for high-dimensional problems is difficult (e.g., problems related to image processing and machine learning), and this has stimulated a lot of research on efficient high-dimensional MCMC algorithms and theoretical structures (relying mostly on the theory of Markov processes) to understand, analyse and quantify their efficiency. In particular, MCMC algorithms based on SDEs have received a lot of attention lately, leading to some major developments in highly efficient MCMC methodology.

In this project, we will be looking to combine ideas from different areas of applied probability and machine learning to develop further highly efficient MCMC algorithms for solving high-dimensional problems arising in computational imaging and machine learning.
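A minimal sketch of the simplest SDE-based MCMC method of this kind, the unadjusted Langevin algorithm (the target density and step size below are illustrative): an Euler discretisation of the Langevin diffusion samples, up to a step-size bias, from the density proportional to exp(-U(x)).

    import numpy as np

    rng = np.random.default_rng(11)
    U_grad = lambda x: x**3 - x                      # U(x) = x^4/4 - x^2/2 (bimodal target)
    h, n = 0.01, 200000
    x, samples = 0.0, np.empty(n)
    for k in range(n):
        x += -U_grad(x) * h + np.sqrt(2 * h) * rng.standard_normal()
        samples[k] = x
    print(samples.mean(), samples.var())             # target moments, up to O(h) bias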

In addition to joining the Maxwell Institute, this PhD is also related to the Probabilistic AI hub, a collaboration between six leading UK universities.

References:

A. Durmus, G. O. Roberts, G. Vilmart, K. C. Zygalakis, Fast Langevin based algorithm for MCMC in high dimensions, Ann. App. Prob., 27(4), 2195-2237, (2017).

J. M. Sanz-Serna, K. C. Zygalakis, Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations, J. Mach. Learn. Res., 22, 1–37, (2021).

T. Klatzer, P. Dobson, Y. Altmann, M. Pereyra, J. M. Sanz-Serna, and K. C. Zygalakis, Accelerated Bayesian imaging by relaxed proximal-point Langevin sampling, SIAM J. Imaging Sci. 17(2), 1078-1117, (2024).

The s-numbers of higher order Sobolev embeddings

Supervisor: Lyonell Boulton

Description: The theory of embeddings between function spaces began with the need to prove existence and uniqueness results for solutions of partial differential equations. The theory flourished as the demand for solving more complicated equations created a need to supply sharper inequalities. This symbiosis is illustrated by the classical Sobolev inequality discovered in the 1930s. In many applications it is enough to know that the constant in such an inequality exists, but for some other applications the precise value of the minimal constant is required. One example of this is in the theory surrounding the so-called Euclidean isoperimetric inequality.

The optimal constant for the first order Sobolev inequality on a segment with zero boundary conditions was computed by Talenti in 1976. Since then, rather little has been studied about the case of Sobolev spaces of higher order. An exception to this is the case of second order embeddings examined recently in [1], using tools from [3]. The purpose of this project will be a thorough investigation of properties of the optimal constant, and the so-called singular numbers for higher order embeddings. 

A good starting point to understand the background of this project is the book [2].

References:

[1] L. Boulton and J. Lang, Nonlinear Analysis. 236 (2023), 113362.

[2] D.E. Edmunds and J. Lang, Eigenvalues, Embeddings and Generalised Trigonometric Functions. Springer-Verlag, Berlin, 2011.

[3] D.E. Edmunds, P. Gurka and J. Lang, J. Approx Theo. 164 (2012), 47-56.

Are regular polygons optimal in relativistic quantum mechanics?

Supervisor: Lyonell Boulton 

Description: Among all polygons with the same perimeter, regular polygons are known to minimise the ground energy of the Dirichlet Laplacian. This is also the case if we consider the area as the fixed geometrical invariant rather than the perimeter. In the language of quantum mechanics, this can be re-interpreted by saying that the non-relativistic Schrodinger operator on boxes with the same perimeter or area attains its minimal energy when the box has a regular base. These results can be traced back to the work of Polya in the 1950s and, arguably, they have given rise to a whole new area of research in Geometrical Spectral Theory, which is still an active subject of enquiry with strong links to Mathematical Physics.

The aim of this PhD project is to develop research in the following direction. Suppose that we replace the Schrodinger operator with the free Dirac operator and pose the same question now in the relativistic setting. Will the box with a regular base be the optimal minimising energy shape (in absolute value) among all others?

Recent progress on this problem includes the paper [1], which considers general regions, and [2], where this question is posed, but not solved. It appears that, even when the region is rectangular, the question of whether the square is indeed the optimal shape is not so easy to tackle.

A concrete initial stage of the project will be the analytic investigation of the problem on quadrilateral regions, trying to identify a shape for which the problem can be reduced and treated with exact formulas. Depending on progress, we might consider numerical investigations along the lines of [3], in order to gain insight into further lines of enquiry. The initial phase of the project might lead in different directions, including the addition of magnetic or electric fields.

References: 

[1] Ann. Henri Poincaré 19, 1465–1487 (2018)

[2] J. Math. Phys. 63, 013502 (2022)

[3] App. Num. Math. 99, 1–23 (2016)

The Laplacian eigenvalues on regions with symmetries.  

Supervisors: Lyonell Boulton and Beatrice Pelloni 

Description: The eigenvalues of the Laplacian on a rectangle can easily be found in terms of trigonometric functions. On an ellipse they can be found in terms of Bessel functions. From these two examples, we might be tempted to assume that it is possible to find the eigenvalues on other simple regions (for example by arguments involving symmetry). Indeed, they can be computed exactly on a right isosceles triangle in terms of those on the square. In general, however, this is not true. Even for a generic triangle (the simplest possible 2D region), it is not true that we can always find a closed-form expression for the smallest eigenvalue or the corresponding eigenfunction.

Regions for which we know the exact eigenvalues include the equilateral triangle. A list of these was first computed by Gabriel Lamé in the 19th century, and the arguments for the calculation are very sophisticated, involving a tessellation of the plane by means of parallelepipeds and studying various symmetry groups. It is remarkable that it was only in 1985 that the first full proof that Lamé’s list was complete was found [1].
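For concreteness, the two explicit Dirichlet spectra just mentioned can be written in closed form (with the normalisations used, e.g., in [2,4]): on the rectangle $[0,a]\times[0,b]$,

$$\lambda_{m,n} = \pi^2\left(\frac{m^2}{a^2}+\frac{n^2}{b^2}\right), \qquad m,n\ge 1,$$

while Lamé's list for the equilateral triangle of side $h$ reads

$$\lambda_{m,n} = \frac{16\pi^2}{9h^2}\left(m^2+mn+n^2\right), \qquad m,n\ge 1.$$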

Recently, a new approach by Fokas and Kalimeris [3] seems to provide an effective mechanism to compute the full list of eigenvalues for the equilateral triangle with Dirichlet, Neumann and other natural boundary conditions. This technique appears to be very promising and is yet to be tested on other regions.

This PhD project will begin by analysing and comparing the proofs given in [1] and [3], alongside a simpler proof, found in [2], that Lamé's list of eigenvalues on the triangle is complete. Then we will move on to investigating the following problem. Can we compute explicitly the eigenvalues on a regular hexagon? And what about other regular polygons in general? For this, the case of mixed boundary conditions (Dirichlet and Neumann) on triangles might be a natural line of enquiry.

See [4] for a full list of references on the subject. 

References: 

[1] M. Pinsky, SIAM J. Math. Anal. 16 (1985) 848-851. 

[2] B.J. McCartin, SIAM Rev. 45 (2003), 267-287. 

[3] T. Fokas and M. Kalimeris, Comp. Methods and Funct. Theo. 14 (2014) 10-33. 

[4] D.S. Grebenkov and B.-T. Nguyen, SIAM Rev. 55 (2013) 601-667. 

Travelling-wave solutions for models of growth, interaction, depletion and diffusion 

Supervisor: Seva Shneer 

Description:  We will study interacting-particle systems relevant for the study of scheduling mechanisms in stochastic networks such as redundancy and load balancing. The models are also of interest in other application areas ranging from biology to communication networks. The evolution of states of particles consists of mechanisms of growth, interaction, depletion and diffusion. We will aim at characterising their scaling limits, in particular in terms of travelling waves. 

Semi-discrete optimal transport theory: Numerical methods and applications 

Supervisor: David Bourne 

Description: Optimal transport theory goes back to 1781 and the French engineer Gaspard Monge, who wanted to find the optimal way of transporting soil for building earthworks for Napoleon's troops. While Leonid Kantorovich made some progress on the problem in the 1940s with the invention of linear programming, the problem remained unsolved for over 200 years. In fact, it was not even known whether a solution existed until some big mathematical breakthroughs in the 1980s and 1990s. These theoretical advances opened the floodgates to applications. Optimal transport theory is now applied to PDEs, geometry, economics, image processing, crowd dynamics, statistics, machine learning, and the list goes on. In July 2018 the Italian mathematician Alessio Figalli won a Fields Medal for his work in optimal transport and PDEs.

This PhD project focusses on an important class of optimal transport problems known as semi-discrete transport problems, which have recently found applications in weather modelling [1], pattern formation [2], microstructure modelling [3], optics, and fluid mechanics. In this project we will explore further applications and develop novel numerical methods. 
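As a minimal computational sketch (the measures and the discretisation below are illustrative): a semi-discrete problem, transporting the uniform density on [0,1] to three weighted points, can be approximated by discretising the continuous measure and solving the resulting linear programme; dedicated semi-discrete solvers instead optimise over Laguerre-cell weights, but this conveys the flavour.

    import numpy as np
    from scipy.optimize import linprog

    n = 200                                          # discretisation of the density
    xs = (np.arange(n) + 0.5) / n
    mu = np.full(n, 1.0 / n)                         # source: uniform on [0, 1]
    ys = np.array([0.1, 0.5, 0.9])                   # target points
    nu = np.array([0.2, 0.5, 0.3])                   # target masses
    C = (xs[:, None] - ys[None, :])**2               # quadratic transport cost

    # marginal constraints: rows of the plan sum to mu, columns sum to nu
    A_eq, b_eq = [], []
    for i in range(n):
        r = np.zeros((n, 3)); r[i, :] = 1.0
        A_eq.append(r.ravel()); b_eq.append(mu[i])
    for j in range(3):
        col = np.zeros((n, 3)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(nu[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    print(res.fun)                                   # approximate optimal transport cost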

References: 

[1] Bourne, D.P., Egan, C.P., Pelloni, B. & Wilkinson, M. (2022) Semi-discrete optimal transport methods for the semi-geostrophic equations, Calculus of Variations and Partial Differential Equations, 61:39. 

[2] Bourne, D.P. & Cristoferi, R. (2021) Asymptotic optimality of the triangular lattice for a class of optimal location problems, Communications in Mathematical Physics, 387, 1549-1602. 

[3] Bourne, D.P., Kok, P.J.J., Roper, S.M. & Spanjer, W.D.T. (2020) Laguerre tessellations and polycrystalline microstructures: A fast algorithm for generating grains of given volumes, Philosophical Magazine, 100, 2677-2707. 

Equilibrium of liquid crystals in exterior domains 

Supervisor: John Ball 

Description: The aim of the project is to investigate the equilibrium configurations of nematic liquid crystals in the 3D region outside a finite number of bounded open sets $W_i$, according to the Oseen-Frank theory whose state variable is a unit vector field giving the mean orientation of the rod-like molecules forming the liquid crystal. This problem was studied in 2D in [1], but the 2D theory has a different flavour in that equilibria are smooth, while in 3D they can have singularities (see [2]) such as point defects. Currently there is much interest in liquid crystal colloids, in which the $W_i$ are particles that can move, and the proposed project has possible developments for the study of such dynamical situations. 

The project offers training in modern techniques of nonlinear partial differential equations and the calculus of variations adaptable to other situations. 

References: 

[1] Lu Liu, The Oseen-Frank theory of liquid crystals, Thesis, Oxford, 2019. 

[2] Haïm Brezis, Jean-Michel Coron, and Elliott H Lieb. Harmonic maps with defects. Communications in Mathematical Physics, 107(4):649–705, 1986.   

One-parameter Semigroups on Metric Graphs 

Supervisor: Lyonell Boulton 

Description: The purpose of this project is to study the time-evolution equation associated to linear differential operators on a graph. We assume that the edges of the graph (of a certain length) are segments and that suitable regularity conditions and boundary conditions are fixed on the nodes. 

According to the seminal work of Kramar-Fijavz, in the case of the Laplacian we know necessary and sufficient conditions at the nodes, for the time evolution problem to have a solution for any initial condition which is square integrable - in the Hilbert space $L^2$. This is a universal existence result. The main purpose of this project is to extend the work of Kramar-Fijavz following three main leads. 

1- Classify the family of initial conditions for which the evolution problem still has a solution, despite the failure of universal existence.

2- Consider more general operators, such as those of Sturm-Liouville-type. 

3- Consider the more general case of Banach spaces $L^p$. 

The project itself will ensure training in the state-of-the-art of non-self-adjoint spectral theory and the theory of one-parameter semigroups on graphs. 

Hierarchical Methods for Stochastic Partial Differential Equations

Description: Partial Differential Equations (PDEs) are important and versatile tools for modelling various phenomena, such as fluid dynamics, thermodynamics and nuclear waste disposal. Stochastic Partial Differential Equations (SPDEs) generalize PDEs by introducing random parameters or forcing. One is then interested in quantifying the uncertainty of the outputs of such models through the computation of various statistics. Accurate computation of such statistics can be costly, as it requires fine time- and space-discretization to satisfy accuracy requirements. Several hierarchical methods have been developed to address such issues and applied successfully to Stochastic Differential Equations (SDEs); in this project we will extend these works to deal with the more complicated SPDEs.
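A minimal sketch of the hierarchical idea for an SDE (a multilevel Monte Carlo estimator, with scalar geometric Brownian motion standing in for the SPDE setting; all parameters are illustrative): coarse and fine discretisations share the same Brownian increments, so the level corrections have small variance and need few samples.

    import numpy as np

    rng = np.random.default_rng(12)
    a, b, T, u0 = 0.05, 0.2, 1.0, 1.0                # du = a u dt + b u dW

    def euler(dW, h):
        u = u0
        for inc in dW:
            u += a * u * h + b * u * inc
        return u

    def level(l, M, n0=4):
        nf = n0 * 2**l; hf = T / nf
        dW = np.sqrt(hf) * rng.standard_normal((M, nf))
        fine = np.array([euler(w, hf) for w in dW])
        if l == 0:
            return fine.mean()
        # the coarse path reuses the fine increments, pairwise summed
        coarse = np.array([euler(w.reshape(-1, 2).sum(1), 2 * hf) for w in dW])
        return (fine - coarse).mean()                # correction E[P_l - P_{l-1}]

    est = sum(level(l, M) for l, M in enumerate((20000, 5000, 1250, 320)))
    print(est, u0 * np.exp(a * T))                   # MLMC estimate vs exact mean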

Supervisor: Abdul-Lateef Haji-Ali

Numerical Methods for Financial Market Models

Description: Many models for the evolution of financial and economic variables, for example interest rates, inflation rates, and advanced models of stock prices, have no known closed-form analytical solution. To be able to work with these models, for example to value financial derivative products and to manage their risk, it is of fundamental importance to design numerical methods that are accurate, efficient, and easily adaptable to changing market conditions. Financial derivatives are ubiquitous: they are embedded in many standard financial and insurance products and play an important role themselves in the risk management of companies. In this project, we will apply methods from stochastic analysis and probability theory to models of financial markets to enhance the understanding of their stochastic properties, and to design high-quality, fast methods for their numerical treatment.
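A minimal sketch of the basic workflow (hypothetical market data): Monte-Carlo valuation of a European call under geometric Brownian motion, checked against the closed-form Black-Scholes price; the models of interest in this project are precisely those for which no such closed form exists.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(13)
    S0, K, r, sigma, T, M = 100.0, 105.0, 0.02, 0.25, 1.0, 400000
    Z = rng.standard_normal(M)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    mc = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    print(mc, bs)                                    # the two prices agree to Monte-Carlo error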

Supervisor: Anke Wiese

An Optimisation View of Deep Learning Methods for Data Science

Description: Data science transforms data into interpretable information to enable accurate decision-making. Such methods rely on advanced mathematical tools. Optimisation is one of them, and it is broadly used to design robust, fast, and scalable algorithms to minimise given objective functions. Since the early 2000s, proximal methods have become state-of-the-art for solving minimisation problems, in particular in the context of inverse problems. During the last decade, proximal algorithms involving neural networks (NNs) have emerged. Two main classes of such hybrid methods can be distinguished. The first approach consists in unrolling an optimisation algorithm over a fixed number of iterations to build the layers of a NN, leading to unfolded NNs. Unfolded NNs are particular instances of end-to-end NNs that are directly used to solve inverse problems, processing corrupted data to produce a corrected output. The second approach relies on replacing the denoising steps of an optimisation algorithm by NNs, leading to plug-and-play (PnP) algorithms.
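A minimal sketch of the second class (a PnP forward-backward iteration; here a simple moving-average filter stands in for a trained NN denoiser, and the forward operator and signal are toy choices):

    import numpy as np

    rng = np.random.default_rng(14)
    n = 128
    A = rng.standard_normal((n, n)) / np.sqrt(n)     # toy forward operator
    x_true = np.zeros(n); x_true[20:40] = 1.0        # piecewise-constant signal
    y = A @ x_true + 0.05 * rng.standard_normal(n)

    def D(x, k=5):                                   # 'denoiser': local averaging
        pad = np.pad(x, k // 2, mode='edge')
        return np.convolve(pad, np.ones(k) / k, mode='valid')

    tau = 0.9 / np.linalg.norm(A, 2)**2              # step size below 1/L
    x = np.zeros(n)
    for _ in range(300):
        x = D(x - tau * A.T @ (A @ x - y))           # gradient step, then denoiser
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))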

Research related to optimisation-based NNs for inverse imaging problems is relatively recent, and devoted methods are evolving fast. Some of the challenges of interest in this field are related to (i) theoretical understanding, (ii) design of new methods (including optimisation algorithms, sampling methods, NNs, etc.), (iii) applications (e.g., medical, astronomical, photon imaging).

Possible research directions for this project include (but are not restricted to):

(1) Theoretical guarantees of hybrid optimisation-NN methods: Although the convergence of PnP algorithms has started to be better understood recently, many questions remain unanswered (or only partially answered). For instance, to ensure convergence of PnP algorithms, NNs must satisfy some technical conditions. How can such NNs be built? Similarly, when unrolling a fixed number of iterations of an optimisation algorithm, all the theoretical guarantees are lost. What type of guarantees do unfolded NNs offer?

(2) Building flexible NNs for inverse problems: In PnP methods, the involved NNs depend on the underlying statistical models (e.g., a higher noise level on the measurements requires stronger denoisers). Hence different NNs must be trained depending on the inverse problem's statistical model, which is computationally prohibitive. How can we build more flexible NNs that can be adapted to multiple statistical models?

(3) Improve NN efficiency using optimisation: Optimisation methods benefit from numerous acceleration strategies. Can (unfolded) NNs benefit from such acceleration techniques to design more powerful networks?

Supervisors: Audrey Repetti and Prof. Jean-Christophe Pesquet (University of Paris-Saclay, CentraleSupelec)

Interacting particle systems and SPDEs of McKean-Vlasov type

Supervisor: Michela Ottobre 

Description: The study of Interacting Particle Systems (IPSs) and related kinetic equations has attracted the interest of the mathematics and physics communities for decades. Such interest is kept alive by the continuous successes of this framework in modelling  a vast range of phenomena, in diverse fields such as biology, social sciences, control engineering, economics, game theory, statistical sampling and simulation, neural networks etc.

When the number of agents/particles in the system is very large the dynamics of the full Particle System (PS) can be rather complex and expensive to simulate; moreover, one is quite often more interested in the collective behaviour of the system rather than in its detailed description. In this context, the established methodology in statistical mechanics and stochastic analysis is to look for simplified models that retain relevant characteristics of the original PS by letting the number N of particles to infinity; the resulting limiting equation for the density of particles is, typically,  a low dimensional, (in contrast with the initial high dimensional PS) non-linear partial differential equation (PDE), where the non-linearity has a specific structure, commonly referred to as a McKean-Vlasov nonlinearity. We will consider PS where the dynamics of each particle is described by a Stochastic Differential Equation (SDE). When N tends to infinity, depending on the specific nature of the noise, one can obtain either a PDE for the particle density (i.e. a deterministic equation, which is the classical setting) or a Stochastic PDE. It is well known that, even in the classical case in which the limit is a PDE, the particle system and the PDE can have very different properties, raising questions about whether the PDE is a good approximation of the initial PS. The case in which the limit is stochastic is by far less investigated and this is the regime on which this project will focus. 

Analysis of Multiscale problems: stochastic averaging and homogenization

Supervisor: Michela Ottobre

Description:  Many systems of interest in the applied sciences share the common feature of possessing multiple scales, either in time or in space, or both. Some approaches to modelling focus on one scale and incorporate the effect of other scales (e.g. smaller scales) through empirical constitutive relations. Multiscale modelling approaches are built on the ambition of treating both scales at the same time, with the aim of deriving (rather than empirically obtaining) efficient coarse-grained (CG) models which incorporate the effects of the smaller/faster scales. Obtaining such CG descriptions in a principled way helps one strike a compromise between microscopic models, accurate but computationally expensive, and macroscopic models, which are less accurate but simpler. Multiscale methods play a fundamental role across science, providing both underpinning for numerics/simulation algorithms and modelling paradigms in an impressive range of fields, such as engineering, material science, mathematical biology, social sciences and climate modelling (notably playing a central role in Hasselmann's programme, where climate/weather are seen as slow/fast dynamics, respectively), to mention just a few.

This project is primarily concerned with the study of stochastic systems which possess multiple time-scales and that are modelled by stochastic differential equations (SDEs). In their simplest form, the systems we consider are made of two components, commonly referred to as the fast and slow scale.  In this case, assuming the fast process (FP) evolves towards a (unique) equilibrium,  so called equilibrium measure (EM), established methodologies allow one to obtain an effective dynamics by substantially ‘replacing’ the FP with its behaviour in equilibrium (intuitively,  due to the large time-scale separation, the FP will immediately reach its equilibrium state).  In this context, the method of multiscale expansions provides a way to formally derive the CG dynamics, while stochastic averaging (and homogenization) techniques provide analytical tools for rigorous proofs.

When using any of these techniques a key assumption is that the dynamics for the FP (more precisely, the so-called ‘frozen process’) has a unique EM (that is, that it is ergodic). Without such an assumption even formal multiscale expansions seem to be no longer useful.  Nonetheless many systems in the applied sciences do exhibit multiple equilibria. In particular, when the fast process has multiple EMs, the procedure we have informally described above can no longer be used as is, and even producing an ansatz for the reduced description of the dynamics becomes nontrivial. This is the starting point for this project.
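For reference, here is a minimal numerical sketch of the classical averaging mechanism described above, in the unique-equilibrium case (a toy slow-fast pair with illustrative parameters): the fast Ornstein-Uhlenbeck process equilibrates quickly, and the slow variable follows the equation obtained by averaging its drift against that equilibrium.

    import numpy as np

    rng = np.random.default_rng(15)
    eps, h, T = 1e-3, 1e-4, 2.0                      # scale separation, step, horizon
    X, Y = 1.0, 0.0
    for _ in range(int(T / h)):
        X += -X * (1.0 + Y**2) * h                   # slow: dX = -X (1 + Y^2) dt
        Y += -Y / eps * h + np.sqrt(2 * h / eps) * rng.standard_normal()
    # fast OU equilibrium is N(0, 1), so E[1 + Y^2] = 2 and the averaged
    # equation is dXbar = -2 Xbar dt
    print(X, np.exp(-2 * T))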

How to apply

If you would like to know more about any of the PhD projects listed above, please contact the relevant supervisor directly; we are always happy to have an informal chat.

We actively promote equality, diversity and inclusion and welcome applications from all qualified applicants.

For informal enquiries about the PhD programme get in touch with:

  • Daniel Coutand (Mathematical Sciences Admissions Officer) for projects in Mathematical Sciences.
  • Lucia Scardia (Actuarial Mathematics and Statistics Admissions Officer) for projects in Actuarial Science and Statistics.
  • Simon Malham (PhD Programme Director).

You can find full details of how to apply by visiting the Maxwell Institute Graduate School (MIGS) website.

For further useful support on how to get a PhD, we strongly encourage you to look at the Piscopia Initiative website (Piscopia is a student-led initiative aimed at helping female and non-binary applicants through the PhD application process; it has been a hugely successful initiative, so please get in touch with the organisers, they are incredibly friendly and helpful).