Principled and Scalable Methods for High Dimensional Graphical Model Selection

Event Date: 

Wednesday, March 11, 2015 - 3:30pm to 5:00pm

Event Date Details: 

Refreshments served at 3:15 PM

Event Location: 

South Hall 5607F

Dr. Sang-Yun Oh (Lawrence Berkeley National Lab)

Title: Principled and Scalable Methods for High Dimensional Graphical Model Selection

Abstract: Learning high dimensional graphical models is a topic of contemporary interest. A popular approach is to use L1 regularization methods to induce sparsity in the inverse covariance estimator, leading to sparse partial covariance/correlation graphs. Such approaches can be grouped into two classes: (1) regularized likelihood methods and (2) regularized regression-based, or pseudo-likelihood, methods. Regression-based methods have the distinct advantage that they do not explicitly assume Gaussianity. One major gap in the area is that none of the popular approaches proposed for solving regression-based objective functions guarantees the existence of a solution. Hence it is not clear whether the resulting estimators actually yield correct partial correlation/partial covariance graphs. To this end, we propose a new regression-based graphical model selection method that is both tractable and provably convergent, leading to well-defined estimators. In addition, we demonstrate that our approach yields estimators with good large sample properties and favorable computational complexity. The methodology is illustrated on both real and simulated data. We also present a novel unifying framework that recovers various pseudo-likelihood graphical model selection methods as special cases of a more general formulation, leading to important insights. (Joint work with Bala Rajaratnam and Kshitij Khare)
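For readers unfamiliar with the two classes of methods mentioned above, the sketch below illustrates them on simulated data using standard off-the-shelf tools; it is not the speaker's proposed method, and the penalty value `alpha` and the edge threshold are arbitrary choices made for illustration.

```python
# Illustrative sketch of the two classes of sparse graph estimators from the
# abstract, on simulated Gaussian data (not the speaker's method).
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))  # placeholder data; substitute real data

# (1) Regularized likelihood: the graphical lasso penalizes the inverse
#     covariance matrix directly and explicitly assumes Gaussianity.
glasso = GraphicalLasso(alpha=0.1).fit(X)
edges_likelihood = np.abs(glasso.precision_) > 1e-8

# (2) Regression-based (pseudo-likelihood) selection: regress each variable
#     on the others with an L1 penalty; nonzero coefficients propose edges.
edges_regression = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = np.delete(np.arange(p), j)
    coef = Lasso(alpha=0.1).fit(X[:, others], X[:, j]).coef_
    edges_regression[j, others] = np.abs(coef) > 1e-8

# Symmetrize the regression-based graph (keep an edge if either regression
# selects it), since the two directions need not agree.
edges_regression = edges_regression | edges_regression.T
```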