Branching Time Active Inference with Bayesian Filtering

Théophile Champion, Marek Grześ, Howard Bowman

Research output: Contribution to journal › Article › peer-review


Abstract

Branching time active inference is a framework that proposes to view planning as a form of Bayesian model expansion. Its roots can be found in active inference, a neuroscientific framework widely used for brain modeling, as well as in Monte Carlo tree search, a method broadly applied in the reinforcement learning literature. Until now, the inference of the latent variables was carried out by taking advantage of the flexibility offered by variational message passing, an iterative process that can be understood as sending messages along the edges of a factor graph. In this letter, we harness the efficiency of an alternative method for inference, Bayesian filtering, which does not require iterating the update equations until convergence of the variational free energy. Instead, this scheme alternates between two phases: integration of evidence and prediction of future states. Both phases can be performed efficiently, yielding a forty-fold speedup over the state of the art.
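The two-phase cycle described in the abstract can be illustrated with a minimal discrete Bayesian filter. This is only a generic sketch, not the paper's actual model: the transition matrix `B`, likelihood matrix `A`, and the observation sequence are hypothetical placeholders.

```python
import numpy as np

# Transition model B[s', s]: P(next state s' | current state s). Hypothetical values.
B = np.array([[0.9, 0.2],
              [0.1, 0.8]])
# Likelihood A[o, s]: P(observation o | state s). Hypothetical values.
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

def integrate_evidence(prior, obs):
    """Evidence-integration phase: weight the prior by the observation likelihood."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

def predict(posterior):
    """Prediction phase: propagate the posterior through the transition model."""
    return B @ posterior

belief = np.array([0.5, 0.5])   # uniform initial belief over two hidden states
for obs in [0, 0, 1]:           # a short, made-up observation sequence
    belief = integrate_evidence(belief, obs)
    belief = predict(belief)
```

Each phase is a closed-form matrix-vector operation followed by renormalization, so no iteration to convergence is required, which is the intuition behind the efficiency gain over variational message passing claimed in the abstract.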
Original language: English
Pages (from-to): 2132–2144
Number of pages: 12
Journal: Neural Computation
Volume: 34
Issue number: 10
DOIs
Publication status: Published - 12 Sept 2022

