Combining video telemetry and wearable MEG for naturalistic imaging

Bibliographic Details
Published in: Imaging neuroscience (Cambridge, Mass.), Vol. 3
Main Authors: O’Neill, George C., Seymour, Robert A., Mellor, Stephanie, Alexander, Nicholas A., Tierney, Tim M., Bernachot, Léa, Fahimi Hnazaee, Mansoureh, Spedden, Meaghan E., Timms, Ryan C., Bush, Daniel, Bestmann, Sven, Brookes, Matthew J., Barnes, Gareth R.
Format: Journal Article
Language: English
Published: MIT Press, 255 Main Street, 9th Floor, Cambridge, Massachusetts 02142, USA, 03.03.2025
ISSN: 2837-6056
DOI: 10.1162/imag_a_00495

Summary: Neuroimaging studies have typically relied on rigorously controlled experimental paradigms to probe cognition, in which movement is restricted, primitive, an afterthought or merely used to indicate a subject’s choice. Whilst powerful, these paradigms do not often resemble how we behave in everyday life, so a new generation of ecologically valid experiments is being developed. Magnetoencephalography (MEG) measures neural activity by sensing extracranial magnetic fields. It has recently been transformed from a large, static imaging modality to a wearable method where participants can move freely. This makes wearable MEG systems a prime candidate for naturalistic experiments going forward. However, these experiments will also require novel methods to capture and integrate information about behaviour executed during neuroimaging, and it is not yet clear how this could be achieved. Here, we use video recordings of multi-limb dance moves, processed with open-source machine learning methods, to automatically identify time windows of interest in concurrent, wearable MEG data. In a first step, we compare a traditional, block-designed analysis of limb movements, where the times of interest are based on stimulus presentation, to an analysis pipeline based on hidden Markov model states derived from the video telemetry. Next, we show that it is possible to identify discrete modes of neuronal activity related to specific limbs and body posture by processing the participants’ choreographed movement in a dancing paradigm. This demonstrates the potential of combining video telemetry with mobile magnetoencephalography and other legacy imaging methods for future studies of complex and naturalistic behaviours.
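
To illustrate the kind of telemetry-to-MEG pipeline the summary describes, the following is a minimal sketch, not the authors' actual code. It assumes pose keypoints have already been extracted from the video with an open-source pose-estimation tool and saved as a NumPy array, that the concurrent wearable-MEG recording is available as an MNE-Python Raw object, and that hmmlearn's GaussianHMM stands in for whichever hidden Markov model implementation was used. File names, the number of states, and epoch limits are all hypothetical.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
import mne

# Hypothetical inputs:
#   keypoints  - (n_frames, n_keypoints, 2) x/y pose coordinates from the video
#   raw        - concurrent wearable-MEG recording as an MNE Raw object
#   video_fps  - frame rate of the video telemetry
keypoints = np.load("pose_keypoints.npy")
raw = mne.io.read_raw_fif("meg_run.fif", preload=True)
video_fps = 30.0

# Frame-to-frame keypoint velocities describe limb movement better than raw
# positions; z-score each feature before model fitting.
velocity = np.diff(keypoints, axis=0).reshape(keypoints.shape[0] - 1, -1)
features = (velocity - velocity.mean(axis=0)) / velocity.std(axis=0)

# Fit an HMM to the telemetry features; each hidden state is intended to
# capture a distinct mode of movement (e.g. a particular limb moving).
n_states = 6  # hypothetical choice
hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=200)
hmm.fit(features)
state_seq = hmm.predict(features)  # one state label per video frame

# Convert state-transition frames into MEG sample indices so the state time
# course defines the events used to epoch the neuroimaging data.
onset_frames = np.flatnonzero(np.diff(state_seq) != 0) + 1
onset_samples = np.round(onset_frames / video_fps * raw.info["sfreq"]).astype(int)
events = np.column_stack([
    onset_samples + raw.first_samp,
    np.zeros_like(onset_samples),
    state_seq[onset_frames] + 1,  # +1 because MNE treats event id 0 as "no event"
])

# Epoch the MEG around each telemetry-derived state onset.
epochs = mne.Epochs(raw, events, tmin=-0.5, tmax=1.5, baseline=None, preload=True)
```

In this sketch the HMM state time course plays the role that stimulus-presentation timings play in a conventional block design: epochs are defined by what the participant actually did, as recovered from the video, rather than by when a cue appeared.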