Graph Laplacian Learning for Exponential Family Noise and Product Graphs

Event Date: 

Wednesday, November 1, 2023 - 3:30pm to 4:45pm

Event Location: 

  • Zoom
Abstract: 

Graph signal processing (GSP) generalizes classical Fourier analysis to signals lying on irregular structures such as networks. However, a common challenge in applying GSP is that the underlying graph of a system is unknown. Well-established methods for graph Laplacian learning, also known as network topology inference, optimize a graph representation, usually the adjacency matrix or the graph Laplacian, so that the total variation of the given signals is minimized on the learned graph. In Gaussian graphical models (GM), graph learning amounts to endowing covariance selection with a Laplacian structure.
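The smoothness criterion above can be made concrete with the Laplacian quadratic form x^T L x, the graph analogue of total variation. A minimal sketch (the 4-node path graph and the signals are illustrative, not from the talk):

```python
import numpy as np

# Hypothetical 4-node path graph: adjacency W and combinatorial
# Laplacian L = D - W, where D is the diagonal degree matrix.
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
L = np.diag(W.sum(axis=1)) - W

def total_variation(L, x):
    """Graph total variation: x^T L x = sum over edges of (x_i - x_j)^2."""
    return float(x @ L @ x)

smooth = np.array([1.0, 1.1, 1.2, 1.3])    # varies slowly along the path
rough = np.array([1.0, -1.0, 1.0, -1.0])   # flips sign across every edge

# Graph learning methods favor graphs on which the observed signals are
# smooth: here the smooth signal scores 0.03 while the rough one scores 12.
assert total_variation(L, smooth) < total_variation(L, rough)
```

Laplacian learning methods effectively search for the L that makes this quadratic form small across all observed signals, subject to L being a valid Laplacian.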


In this talk, I will address two challenges in graph Laplacian learning.

First, while existing methods have been developed for continuous graph signals, inferring the graph structure for other types of data, such as discrete counts or binary signals, remains underexplored. We generalize graph Laplacian learning to exponential-family noise distributions, allowing various data types to be modeled, and develop an alternating algorithm that estimates both the underlying graph Laplacian and the unobserved smooth representation from noisy signals. In synthetic and real-world experiments, we demonstrate that our approach outperforms competing Laplacian estimation methods under noise-model mismatch.
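The flavor of such an alternating scheme can be sketched for Poisson counts. This is a hypothetical toy, not the talk's algorithm: observe counts y ~ Poisson(exp(z)) with the log-rate z smooth on an unknown graph, then alternate between (1) estimating z by gradient descent on the Poisson negative log-likelihood plus a Laplacian smoothness penalty, and (2) re-weighting edges from the current z with a Gaussian kernel, a standard smoothness-based graph-learning heuristic; all sizes and step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8
z_true = np.sin(np.linspace(0, 2 * np.pi, n))   # smooth latent log-rate
y = rng.poisson(np.exp(z_true)).astype(float)   # observed discrete counts

def laplacian(W):
    """Combinatorial Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

W = np.ones((n, n)) - np.eye(n)   # start from a fully connected graph
alpha = 0.5                       # smoothness-penalty weight (illustrative)
z = np.log(y + 1.0)               # crude initialization of the log-rate

for _ in range(20):               # outer alternation
    L = laplacian(W)
    for _ in range(100):          # (1) z-step: descend exp(z) - y*z + alpha*z'Lz
        grad = np.exp(z) - y + 2 * alpha * L @ z
        z -= 0.05 * grad
    d2 = (z[:, None] - z[None, :]) ** 2
    W = np.exp(-d2)               # (2) W-step: Gaussian-kernel edge weights
    np.fill_diagonal(W, 0.0)

# The learned Laplacian is symmetric with zero row sums, as required.
L = laplacian(W)
assert np.allclose(L, L.T) and np.allclose(L.sum(axis=1), 0.0)
```

The key point the sketch shares with the talk's setting is that the latent smooth representation and the Laplacian are estimated jointly, rather than fitting the Laplacian directly to the noisy discrete observations.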

The second problem is learning Cartesian product graphs under Laplacian constraints. The Cartesian graph product is a natural way of modeling higher-order conditional dependencies and is also key to generalizing GSP to multi-way tensors. We establish statistical consistency for the penalized maximum likelihood estimation (MLE) of a Cartesian product Laplacian and propose an efficient algorithm to solve the problem. We also extend our method to perform efficient joint graph learning and imputation in the presence of structural missing values. Experiments on synthetic and real-world datasets demonstrate that our method outperforms previous GSP and GM methods.
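The structure being exploited here is the standard Kronecker-sum factorization of a Cartesian product graph's Laplacian: L = L1 ⊗ I + I ⊗ L2, whose eigenvalues are all pairwise sums of the factors' eigenvalues. A small check with two path graphs (the sizes are illustrative):

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of the n-node path graph."""
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    return np.diag(W.sum(axis=1)) - W

L1, L2 = path_laplacian(3), path_laplacian(4)

# Laplacian of the Cartesian product graph as a Kronecker sum.
L = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)

# Its spectrum is exactly all pairwise sums of the factor spectra --
# the property that lets GSP extend to multi-way tensor data.
e1 = np.linalg.eigvalsh(L1)
e2 = np.linalg.eigvalsh(L2)
sums = np.sort((e1[:, None] + e2[None, :]).ravel())
assert np.allclose(np.sort(np.linalg.eigvalsh(L)), sums)
```

Because the product Laplacian is fully determined by the two small factor Laplacians, learning them jointly is far cheaper than learning a single unstructured Laplacian on the full 12-node (in general n1*n2-node) product.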
 
Short Bio: 
Gal Mishne is an assistant professor in the Halıcıoğlu Data Science Institute (HDSI) at UC San Diego, affiliated with the ECE department, the CSE department, and the Neurosciences Graduate Program. Her research interests include high-dimensional data analysis, geometric representation learning, image processing, and computational neuroscience. Before joining UCSD, Dr. Mishne was a Gibbs Assistant Professor in the Applied Math program at Yale University, in Prof. Ronald Coifman's research group. She completed her PhD in 2017 in the Faculty of Electrical Engineering at the Technion under the supervision of Prof. Israel Cohen. Dr. Mishne is a 2017 Rising Star in EECS and an Emerging Scholar in Science.