Video reconstructed from human brain activity

In this video, researchers show how an fMRI scanner and software analysis of the resulting brain scans can be used to reconstruct what a person was seeing, based on their brain activity alone.

The clip on the left-hand side shows the visual input that was shown to the person being scanned, while the clip on the right-hand side shows the reconstruction generated from the scan.

This process worked as follows:

  1. Brain activity was recorded while the subject watched several hours of movie trailers.
  2. Software was used to build regression models that translate between visual features (shapes, edges, and motion) in the known movie clips and the measured brain activity, which was recorded at several thousand points in the brain.
  3. Additional brain activity was recorded using a new set of movie trailers to test performance of the models.
  4. A library of 5,000 hours of random YouTube video was assembled, and each of these video clips was run through the models to predict the brain activity it would evoke. The clips whose predicted activity was closest to the measured activity were compiled into the video below.
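
The four steps above can be sketched with synthetic data. Everything here is illustrative: the array sizes, the plain least-squares fit, and the random "features" stand in for the study's actual feature space and modelling, which the paper describes in full.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50      # measured brain-activity points (several thousand in the real study)
n_features = 20    # visual features (shapes, edges, motion) per clip -- illustrative
n_train = 200      # stand-in for the "movie trailer" training clips
n_library = 1000   # stand-in for the random YouTube library

# Synthetic ground-truth mapping from visual features to voxel responses.
true_weights = rng.normal(size=(n_features, n_voxels))

# Steps 1-2: record activity for training clips, then fit a linear
# regression model (least squares: activity ~ features @ weights).
train_features = rng.normal(size=(n_train, n_features))
train_activity = train_features @ true_weights \
    + 0.1 * rng.normal(size=(n_train, n_voxels))
weights, *_ = np.linalg.lstsq(train_features, train_activity, rcond=None)

# Step 3: a held-out test clip and the activity it evokes.
test_features = rng.normal(size=(1, n_features))
test_activity = test_features @ true_weights

# Step 4: predict activity for every library clip and rank clips by how
# close the predicted activity is to the measured activity.
library_features = rng.normal(size=(n_library, n_features))
predicted = library_features @ weights
distances = np.linalg.norm(predicted - test_activity, axis=1)
best_clips = np.argsort(distances)[:10]  # indices of the best-matching clips
```

The best-matching library clips (here just indices into the synthetic library) are what would then be compiled into the reconstruction.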

A paper about this work is available here.
