DSPRelated.com

Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning series)

Daphne Koller and Nir Friedman, 2009

Most tasks require a person or an automated system to reason: to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality.

Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions that deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty.

The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors and readers can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.


Why Read This Book

You should read this book if you need a rigorous, unified treatment of probabilistic models and inference techniques so you can build interpretable, data-driven signal processing systems. It teaches both exact and scalable approximate inference and structure/parameter learning methods that you can apply to speech, radar, communications and other statistical DSP problems.

Who Will Benefit

Signal-processing engineers and researchers (graduate level) who design probabilistic models or integrate Bayesian inference and learning into audio, speech, radar or communications systems.

Level: Advanced — Prerequisites: Solid probability and statistics, linear algebra, multivariable calculus; familiarity with basic signal-processing concepts or estimation theory is helpful.

Key Takeaways

  • Formulate complex systems as probabilistic graphical models (Bayesian networks, Markov networks, dynamic models).
  • Perform exact inference using variable elimination and junction-tree (clique-tree) algorithms.
  • Apply and implement approximate inference: sampling (MCMC, importance sampling) and variational methods.
  • Learn parameters and structures from data using maximum likelihood, Bayesian estimation and EM.
  • Model time-series with dynamic Bayesian networks, relate DBNs to Kalman filters and particle filters.
  • Design and evaluate interpretable probabilistic models for real-world signal-processing tasks including model selection and regularization.
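To make the inference takeaways concrete, here is a minimal sketch of variable elimination on a hypothetical three-node chain A → B → C with binary variables. The network and all CPT numbers are made up for illustration; the book develops the general algorithm in far more depth.

```python
import numpy as np

# Hypothetical chain Bayesian network A -> B -> C with binary variables.
# CPTs are illustrative, not from the book.
p_a = np.array([0.6, 0.4])               # P(A)
p_b_given_a = np.array([[0.7, 0.3],      # P(B | A=0)
                        [0.2, 0.8]])     # P(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],      # P(C | B=0)
                        [0.5, 0.5]])     # P(C | B=1)

# Variable elimination: sum out A first, then B.
phi_b = p_a @ p_b_given_a                # factor over B: sum_a P(A) P(B|A)
p_c = phi_b @ p_c_given_b                # marginal over C: sum_b phi(B) P(C|B)

# Brute-force check: build the full joint P(A, B, C) and marginalize.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
p_c_brute = joint.sum(axis=(0, 1))
```

The point of the elimination ordering is that each intermediate factor stays small (here, one binary variable), whereas the brute-force joint grows exponentially with the number of nodes.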

Topics Covered

  1. Introduction and probabilistic foundations
  2. Representing joint distributions: Bayesian networks
  3. Undirected models and Markov networks
  4. Conditional independence, d-separation and model semantics
  5. Exact inference: variable elimination and clique-tree (junction tree) algorithms
  6. Approximate inference: sampling methods and importance sampling
  7. Variational inference and message-passing algorithms
  8. Parameter learning: ML, Bayesian estimation, and the EM algorithm
  9. Structure learning for graphical models
  10. Continuous variables and Gaussian graphical models
  11. Dynamic Bayesian networks, Kalman filters and particle methods
  12. Advanced topics: decision graphs, approximate structure learning, and scalability
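As a small illustration of topic 11 (the connection between dynamic Bayesian networks and Kalman filtering), here is a sketch of a Kalman filter for a one-dimensional linear-Gaussian random-walk model. The model, noise variances, and sequence length are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Illustrative 1-D linear-Gaussian state-space model (a simple DBN):
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, Q)     (state transition)
#   y_t = x_t + v_t,      v_t ~ N(0, R)     (noisy measurement)
Q, R = 0.01, 0.25        # assumed process and measurement noise variances

def kalman_filter(ys, x0=0.0, p0=1.0):
    """Exact filtered posterior means under the linear-Gaussian model."""
    x, p = x0, p0
    means = []
    for y in ys:
        p = p + Q                 # predict: variance grows by process noise
        k = p / (p + R)           # Kalman gain
        x = x + k * (y - x)       # update mean toward the measurement
        p = (1.0 - k) * p         # update (shrink) posterior variance
        means.append(x)
    return np.array(means)

# Simulate a trajectory and noisy observations, then filter.
rng = np.random.default_rng(0)
T = 50
xs = np.cumsum(rng.normal(0.0, np.sqrt(Q), T))   # latent random walk
ys = xs + rng.normal(0.0, np.sqrt(R), T)         # noisy measurements
est = kalman_filter(ys)
```

Because the model is linear-Gaussian, this recursion is exact inference in the corresponding DBN; for nonlinear or non-Gaussian models the book's particle methods take over the same role approximately.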

How It Compares

Compared with Bishop's Pattern Recognition and Machine Learning, this book treats graphical-model theory in greater depth and detail. It is also more tightly focused on PGMs, with broader algorithmic treatment, than Murphy's Machine Learning: A Probabilistic Perspective, which is a wider-ranging single-author ML compendium with more applied examples.
