A multimodal, large-scale collection for studying natural object representations in human brain and behavior.
All data were collected with the same stimulus set: the THINGS database of 1,854 object concepts and 26,107 natural images. THINGS-data links fMRI, MEG, and behavioral similarity judgments measured with these stimuli.
All data are openly available, together with ready-to-use derivatives and analysis code: preprocessed fMRI and MEG derivatives via the Figshare collection, raw fMRI and MEG data in BIDS format on OpenNeuro, and behavioral similarity judgments and embeddings on OSF.
Triplet odd-one-out similarity judgments (~4.7 million) over THINGS images, plus a semantic embedding model capturing interpretable dimensions underlying perceived object similarity.
High-resolution functional magnetic resonance imaging (fMRI) with dense sampling per participant during natural object viewing: three participants each viewed more than 8,000 images of 720 objects.
Time-resolved magnetoencephalographic (MEG) recordings during object viewing.
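The odd-one-out logic behind the triplet judgments can be sketched in a few lines. Everything below is a toy illustration with hypothetical dimension names and values, not the released embedding: given embedding vectors, the predicted odd one out is the item left over from the most similar pair.

```python
import numpy as np

def predict_odd_one_out(emb, i, j, k):
    """Predict the odd one out in a triplet (i, j, k).

    Similarity is the dot product of embedding vectors; the odd one
    out is the item NOT in the most similar pair.
    """
    sims = {
        k: emb[i] @ emb[j],  # if (i, j) is the most similar pair, k is odd
        j: emb[i] @ emb[k],
        i: emb[j] @ emb[k],
    }
    return max(sims, key=sims.get)

# Toy embedding with two hypothetical interpretable dimensions:
# rows = objects, columns = e.g. "animacy" and "roundness".
emb = np.array([
    [0.9, 0.1],  # 0: dog
    [0.8, 0.2],  # 1: cat
    [0.1, 0.9],  # 2: ball
])
print(predict_odd_one_out(emb, 0, 1, 2))  # → 2 (the ball)
```

Repeating this choice rule over millions of triplets is what constrains the embedding's dimensions.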
A few concrete example use cases from Teichmann, Contier, and colleagues:
The main THINGS-data paper (Hebart, Contier, Teichmann et al., 2023) shows how to (i) replicate classic findings on the representation of object animacy and size in the human brain, (ii) link the behavioral embedding to cortical responses, and (iii) integrate object responses resolved in time and space.
The THINGS MEG data were used in “Dynamic representation of multidimensional object properties in the human brain” to model the temporal dynamics of neural signals underlying human object perception.
Using the behavioral and fMRI components of THINGS-data, “Distributed representations of behavior-derived object dimensions in the human visual system” mapped interpretable behavioral dimensions onto cortical response patterns, revealing that the human representation of objects is distributed across the visual system.
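Linking a behavioral embedding to brain responses in this way follows the general logic of representational similarity analysis (RSA). A minimal sketch with synthetic stand-in data, not the papers' actual pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-ins: a behavioral embedding (objects x dimensions)
# and neural response patterns (objects x voxels/sensors). Real data
# would come from the OSF embedding and the fMRI/MEG responses.
n_objects = 20
embedding = rng.random((n_objects, 5))
neural = embedding @ rng.random((5, 100)) + 0.1 * rng.random((n_objects, 100))

# Representational dissimilarity matrices (condensed pairwise form).
behav_rdm = pdist(embedding, metric="correlation")
neural_rdm = pdist(neural, metric="correlation")

# RSA: rank-correlate the two RDMs across all object pairs.
rho, p = spearmanr(behav_rdm, neural_rdm)
print(f"behavior-brain RSA: rho = {rho:.2f}")
```

A positive rank correlation indicates that object pairs judged similar behaviorally also evoke similar neural patterns.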
If you use THINGS-data in your work, please cite the main paper:
Hebart MN, Contier O, Teichmann L, Rockter AH, Zheng CY, Kidder A, Corriveau A, Vaziri-Pashkam M, Baker CI (2023). THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 12:e82580. https://doi.org/10.7554/eLife.82580.
@article{THINGSdata,
title = {THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior},
author = {Hebart, Martin N and Contier, Oliver and Teichmann, Lina and Rockter, Adam H and Zheng, Charles Y and Kidder, Alexis and Corriveau, Anna and Vaziri-Pashkam, Maryam and Baker, Chris I},
journal = {eLife},
volume = {12},
pages = {e82580},
year = {2023},
doi = {10.7554/eLife.82580},
url = {https://doi.org/10.7554/eLife.82580}
}
This GitHub repository contains the analysis code used for the neuroimaging results in the THINGS-data paper. It focuses on fMRI and MEG; raw data live on OpenNeuro, and derivatives on Figshare. Source code: github.com/vicco-group/THINGS-data
The MRI/ folder contains Python modules, scripts, and notebooks for working with THINGS-fMRI:
ThingsMRIdataset and related utilities to navigate BIDS-formatted data.
ThingsmriLoader for loading single-trial betas, voxel metadata, and masks.
Analysis scripts and notebooks (e.g., glm.py, animacy_size.ipynb).
See MRI/README.md and example notebooks in MRI/notebooks/ for usage.
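To illustrate what working with single-trial betas looks like downstream of the loader, here is a hypothetical ROI-masking sketch with synthetic arrays standing in for NIfTI volumes (the actual ThingsmriLoader API may differ; see MRI/README.md):

```python
import numpy as np

# Hypothetical sketch: apply a boolean ROI mask to a 4-D array of
# single-trial betas, as loader-style code might do. Arrays here are
# synthetic stand-ins; the real betas live on Figshare.
rng = np.random.default_rng(0)
betas = rng.standard_normal((4, 4, 4, 10))  # (x, y, z, n_trials)
roi_mask = rng.random((4, 4, 4)) > 0.5      # boolean ROI volume

# Extract a trials x voxels matrix for the ROI.
roi_betas = betas[roi_mask].T
print(roi_betas.shape)  # (10, n_roi_voxels)
```

The resulting trials-by-voxels matrix is the typical input to decoding or RSA analyses.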
The MEG/ folder provides a stepwise pipeline for preprocessing, quality control, validation analyses, and decoding/RSA using MEG and eyetracking data.
See MEG/README_MEG.txt for a description of the pipeline and dependencies.
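Decoding from MEG typically follows the common time-resolved pattern of fitting one classifier per timepoint. A generic sketch with synthetic data (not the pipeline's actual code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic MEG-like data: trials x sensors x timepoints, with two
# object classes that become separable after "stimulus onset" (t >= 20).
n_trials, n_sensors, n_times = 80, 32, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 20:] += 0.8  # class effect only after onset

# Time-resolved decoding: one cross-validated classifier per timepoint.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"pre-onset: {accuracy[:20].mean():.2f}, "
      f"post-onset: {accuracy[20:].mean():.2f}")
```

Plotting the accuracy time course reveals when object information becomes decodable from the neural signal.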