Graph signal processing (GSP) generalizes classical Fourier analysis to signals supported on irregular structures such as networks. A common challenge in applying GSP, however, is that the underlying graph of a system is unknown. Well-established methods for graph Laplacian learning, also known as network topology inference, optimize a graph representation, usually the adjacency matrix or the graph Laplacian, so that the given signals have minimal total variation on the learned graph. In Gaussian graphical models (GGMs), graph learning amounts to endowing covariance selection with a Laplacian structure.
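Concretely, the smoothness criterion referred to above is usually written as follows (standard notation from the graph-learning literature, not taken from the talk itself). Given a matrix $X \in \mathbb{R}^{n \times p}$ whose rows $\mathbf{x}_i$ collect the $p$ signal values observed at node $i$, one solves

```latex
\min_{L \in \mathcal{L}} \; \operatorname{tr}\!\left(X^{\top} L X\right),
\qquad
\operatorname{tr}\!\left(X^{\top} L X\right)
  = \frac{1}{2} \sum_{i,j} W_{ij}\, \lVert \mathbf{x}_i - \mathbf{x}_j \rVert_2^2,
```

where $W$ is the (nonnegative) adjacency matrix encoded by $L$, and $\mathcal{L}$ is the set of valid combinatorial Laplacians, e.g. $\mathcal{L} = \{\, L = L^{\top} : L\mathbf{1} = \mathbf{0},\; L_{ij} \le 0 \text{ for } i \ne j,\; \operatorname{tr}(L) = n \,\}$, with the trace constraint ruling out the trivial all-zero solution.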
In this talk, I will address two challenges in graph Laplacian learning.
First, while existing methods have been developed for continuous graph signals, inferring the graph structure from other data types, such as discrete counts or binary signals, remains underexplored. We generalize graph Laplacian learning to exponential-family noise distributions, allowing various data types to be modeled, and develop an alternating algorithm that jointly estimates the underlying graph Laplacian and the unobserved smooth representation from noisy signals. In synthetic and real-world experiments, we demonstrate that our approach outperforms competing Laplacian estimation methods under noise-model mismatch.
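The abstract does not spell out the algorithm, so the following is only a rough illustration of what such an alternating scheme might look like for Poisson count data. All function names are hypothetical; the gradient-descent rate estimator and the Gaussian-kernel graph update are simple stand-ins, not the authors' method (a real implementation would solve a properly constrained Laplacian subproblem in the graph step).

```python
import numpy as np


def laplacian_from_signals(Z, sigma=1.0):
    """Heuristic graph-update step: Gaussian-kernel weights from pairwise
    distances between the nodes' smooth representations (a stand-in for a
    constrained Laplacian solver). Returns a trace-normalized Laplacian."""
    D2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    W = np.exp(-D2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    return L * (Z.shape[0] / np.trace(L))  # enforce tr(L) = n


def smooth_rates(X, L, lam=0.5, lr=0.02, iters=500):
    """Signal-update step: estimate latent log-rates Z for Poisson counts X
    by gradient descent on the penalized negative log-likelihood
        sum_ij [exp(Z_ij) - X_ij * Z_ij]  +  lam * tr(Z^T L Z)."""
    Z = np.log1p(X)  # initialize near the unpenalized log-rates
    for _ in range(iters):
        grad = np.exp(Z) - X + 2.0 * lam * (L @ Z)
        Z -= lr * grad
    return Z


def alternating_fit(X, n_outer=5):
    """Alternate between estimating the smooth representation and
    re-learning the graph Laplacian from it."""
    n = X.shape[0]
    L = n * np.eye(n) - np.ones((n, n))  # complete-graph Laplacian
    L *= n / np.trace(L)
    Z = None
    for _ in range(n_outer):
        Z = smooth_rates(X, L)
        L = laplacian_from_signals(Z)
    return L, Z
```

The returned matrix satisfies the usual combinatorial-Laplacian properties (symmetric, zero row sums, nonpositive off-diagonals, trace normalized to the number of nodes), which is the structural constraint the abstract's covariance-selection analogy refers to.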