Hand-selective visual regions represent how to grasp 3D tools for use: brain decoding during real actions

Published: Oct. 15, 2020, 4:02 p.m.

Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.14.339606v1?rss=1

Authors: Knights, E., Mansfield, C., Tonin, D., Saada, J., Smith, F., Rossit, S.

Abstract: Most neuroimaging experiments that investigate how tools and their associated actions are represented in the brain use visual paradigms in which objects and body parts are displayed as 2D images and no real movements are performed. These studies have revealed a tight relationship between hand- and tool-selective areas in lateral occipitotemporal cortex (LOTC) and the intraparietal sulcus (IPS), thought to reflect action-related processing, but this claim has never been directly tested. Here we addressed this by testing whether independently, visually defined category-selective areas were sensitive to real action properties involving 3D tools. Specifically, using multi-voxel pattern analysis (MVPA), we tested whether brain activity patterns differed depending on whether grasping was consistent or inconsistent with how tools are typically grasped for use (e.g., grasping a knife by its handle rather than by its serrated edge). In a block-design fMRI paradigm, participants grasped the left or right sides of 3D tools (kitchen utensils) and 3D non-tools (bar-shaped objects) with the right hand. Importantly, and unknown to participants, varying the movement direction (right/left) meant that tool grasps were performed in either a typical (by the handle) or atypical (by the functional end) manner. We found that representations of whether a 3D tool is being grasped appropriately for use were decodable from hand-selective areas (LOTC-Hand and IPS-Hand), but not from tool-, object-, or body-selective areas, even though these regions partially overlap. These findings indicate that representations of how to grasp tools for use are automatically evoked in visual regions specialised for representing the human hand.

Copyright belongs to the original authors. Visit the link for more info.
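For readers unfamiliar with the decoding approach the abstract describes, the sketch below illustrates how a within-ROI MVPA classification analysis is commonly set up: a linear classifier trained to distinguish typical from atypical grasp blocks using voxel patterns from a region of interest, with leave-one-run-out cross-validation. This is a minimal illustration under assumed data shapes and labels (simulated patterns, a linear SVM, scikit-learn), not the authors' actual pipeline.

```python
# Minimal sketch of a within-ROI MVPA decoding analysis (assumed pipeline,
# not the authors' code): classify typical vs. atypical tool grasps from
# per-block activity patterns within a hand-selective ROI, using a linear
# SVM and leave-one-run-out cross-validation across fMRI runs.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 8 runs, 4 blocks per condition per run, 250 ROI voxels.
n_runs, n_blocks_per_cond, n_voxels = 8, 4, 250
n_samples = n_runs * n_blocks_per_cond * 2

X = rng.standard_normal((n_samples, n_voxels))   # block-wise patterns (samples x voxels)
y = np.tile([0, 1], n_samples // 2)              # 0 = typical grasp, 1 = atypical grasp
runs = np.repeat(np.arange(n_runs), n_blocks_per_cond * 2)  # run label per sample

# Standardize voxel responses, then fit a linear SVM;
# cross-validate by leaving one run out at a time.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With random data as above, accuracy should hover around chance (0.50); above-chance decoding in a real dataset would indicate that the ROI's activity patterns carry information about whether the grasp was appropriate for tool use.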