Hypothesis 1.1
Observation of a prosthesis executing a motor task will result in less sustained neural activity in cortical areas for grasp and goal recognition than will observing an intact hand performing the same actions.
Hypothesis 1.2
The effect of target object somatosensory properties on neural areas responsible for somatosensory property assessment will be modulated by the end-effector in an observed task.
Aim 2
Determine the neurobehavioral effects of modulation of visual and somatosensory feedback during a reach-and-grasp task.
Subjects will be divided into vision or occluded-vision groups and will execute a reach-and-grasp task while wearing an upper-extremity prosthetic limb or while using a tool. Prosthetic-limb subjects will receive vibrotactile feedback during either the first or the last half of the trials. The neurobehavioral effects of prosthesis use, tool use, augmented feedback, and modulation of visual feedback during the task will be assessed.
Hypothesis 2.1
Alpha frequency event-related desynchronization in somatosensory areas will be increased by reduced visual feedback and decreased by augmented somatosensory feedback.
Hypothesis 2.2
Event-related desynchronization of frontal theta power will increase in the occluded vision condition.
Hypothesis 2.3
Decreased availability of visual or somatosensory feedback will decrease grasp aperture precision.
Hypothesis 2.4
Use of a tool rather than a prosthesis will result in smaller changes in neural activity in premotor areas, while occluded vision during tool use will result in increased neural activity in parietal areas.
Aim 3
How do contact events change neurobehavioral outcomes of prosthesis use \citep{Flanagan2006}?
Hypothesis 3.1
Simulated contact events during prosthesis use...
Hypothesis 3.2
...also...
Background
Cortical areas have characteristic rhythms, or frequencies, at which neural pools communicate with other cortical or sub-cortical areas. For example, the mu rhythm, spanning approximately \SIrange{8}{13}{\Hz}, is associated with neural activity involved in sensory integration. When requirements for sensory integration are low, these rhythms become synchronized, resulting in higher power over the corresponding cortical areas. When demand increases, neural activity becomes desynchronized, quantified as a decrease in power and termed event-related desynchronization (ERD). ERD is not specific to mu frequencies; it is also found in beta (\SIrange{13}{30}{\Hz}, associated with motor movement), theta (\SIrange{4}{7}{\Hz}, associated with attentional demand), and other frequency ranges. ERD thus provides insight into which cortical areas are active and their level of activity.
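As one conventional quantification (a sketch of common practice, not a method specified here), ERD at time $t$ can be expressed as the percent change in band-limited power relative to a reference interval:
\begin{equation}
\mathrm{ERD}(t) = \frac{P(t) - \bar{P}_{\mathrm{ref}}}{\bar{P}_{\mathrm{ref}}} \times 100\,\%,
\end{equation}
where $P(t)$ is the power in the frequency band of interest and $\bar{P}_{\mathrm{ref}}$ is the mean power during a pre-event reference period; negative values indicate desynchronization and positive values indicate synchronization.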
Another method of measuring neural activity is functional magnetic resonance imaging (fMRI). Increases in neural activity result in greater demand for oxygen, which the cardiovascular system accommodates by increasing regional cerebral blood flow \citep{Logothetis2002}. Deoxygenated hemoglobin is paramagnetic while oxygenated hemoglobin is not, so changes in the local ratio of oxygenated to deoxygenated blood alter the measured magnetic resonance signal.
Blood-oxygen-level-dependent (BOLD) fMRI provides an indirect measure of neural activity \citep{Lee2010}.
Brain networks process visual, somatosensory, and goal information to create motor actions that facilitate accomplishing a task goal. Often, these task goals involve an interaction between the hand or end-effector and an object. Putative networks and brain areas for these processes were characterized by \citet{Fagg1998} and have been further refined \citep{Rizzolatti2001}. The processing these neural areas perform includes object recognition, object shape determination, grasp selection, and reach selection.
Recognition of objects has been shown to involve the lateral occipital cortex (LOC) \citep{Grill-Spector1999}. The anterior LOC is sensitive to object shape, while the posterior LOC is sensitive to object transformations. In addition to physical shape, other object properties are also represented in cortical areas. For example, \citet{Gallivan2014} found that object weight is represented in the occipitotemporal cortex (OTC). Through repeated lifting, object weight may become associated with the surface properties of objects, an association that also occurs in the OTC.
Visual processing that transforms target object properties into grasp representations proceeds as described by \citet{Rizzolatti2001}. Visual information flows along two major pathways: the dorsal ``where/how'' pathway and the ventral ``what'' pathway. Considering the dorsal stream first, visual information arrives at parietal areas where reach and grasp selection occur. Reach selection occurs in the posterior parietal cortex, specifically in areas around the intraparietal sulcus, superior parietal lobule, and precuneus \citep{Inouchi2013}. Simultaneously, visual input is conveyed to the anterior intraparietal sulcus (AIP), where three-dimensional object features are extracted, the relevant features are determined, and a grasp is selected. In addition to AIP, grasps are also coded for by the LOC, middle intraparietal sulcus (MIP), and inferior frontal gyrus (IFG) \citep{Hamilton2007}.
Area F5 activates a motor prototype based on grasp information from AIP and relays the information to area F6. F6 determines when the movement will proceed, at which point the motor prototype is sent to F1/M1 for execution.
Concurrent with dorsal stream processing, visual information also proceeds along the ventral stream. Object information arrives at the inferior temporal lobe (IT), where object meaning is decoded. This information becomes input to the dorsolateral prefrontal cortex (DLPFC) for integration with the current task goals. Along with information on motivation, information from the DLPFC is conveyed to area F6.
After movement commences and during the reach, the hand is preshaped to facilitate object interaction based on the goals of the task \citep{Rizzolatti2001} and the visual properties of the object \citep{Winges2003}, as determined by prior ventral and dorsal stream processing.
Motor control helps ensure successful execution of the created motor plan. The challenges of motor control, even in intact people, are increased by the delayed, unreliable, noisy feedback of sensory systems \citep{Izawa2008a}. For example, the time elapsed between a tactile cue at the fingertip and a change of the finger's action is approximately \SI{100}{\ms}, while a visual cue requires approximately \SI{200}{\ms} to elicit a change in fingertip action \citep{Johansson2009a}. Because of these delays and the noise induced by the motor system itself, sensorimotor systems weight the available information to select the best representation of the current state of the body and environment \citep{Blouin2014}. Decreased reliability of somatosensory or visual information has been shown to result in increased ERD in premotor and parietal areas, respectively, while a large change in the reliability of visual and somatosensory information results in a change in the neural substrate recruited for processing: processing shifts from frontal/parietal areas (mu rhythms) to anterior frontal higher cognitive areas (theta rhythms), and these shifts are accompanied by ERD in the areas involved \citep{Mizelle2016}. Changes in the availability of sensory information have also been shown to result in a shift from closed-loop, feedback, reactive control to open-loop, feedforward, predictive control \citep{Macuga2014}.
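One common formalization of such reliability weighting, included here as an illustrative sketch of a minimum-variance cue-combination scheme rather than a claim about the cited studies, combines visual and somatosensory estimates in inverse proportion to their variances:
\begin{equation}
\hat{x} = w_v \hat{x}_v + w_s \hat{x}_s, \qquad
w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_s^2}, \qquad
w_s = 1 - w_v,
\end{equation}
where $\hat{x}_v$ and $\hat{x}_s$ are the visual and somatosensory estimates of a state variable and $\sigma_v^2$ and $\sigma_s^2$ are their variances; as one modality becomes noisier, its weight, and thus its influence on the combined estimate, decreases.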
By using a well-characterized motor task and modulating sensory availability during the task, it is possible to learn the sensory involvement in the sensorimotor acts of creating, executing, and recognizing the task. Reach-to-grasp is one such well-characterized task. Stereotypical reach-to-grasp kinematics have been assessed by \citet{Jeannerod1984}, \citet{Wing1986}, and others. In these kinematic assessments, grasp aperture was found to evolve from a minimal rest position to a maximum that facilitates object interaction at approximately $\frac{2}{3}$ of the reach distance, then to decrease over the final $\frac{1}{3}$ of the reach until contact is made with the object \citep{Jeannerod1984}. During reach-to-grasp, the reach and grasp actions are coordinated \citep{Coats2008,Todorov2004a,Zackowski2002}, such that perturbation of one affects the other \citep{Wing1986}.
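For illustration only, the following minimal sketch shows how peak grasp aperture and its timing within the reach might be extracted from motion-capture marker trajectories; the marker names, array shapes, and function name are assumptions, not a description of the present protocol.
\begin{verbatim}
import numpy as np

def aperture_summary(thumb, index, wrist):
    # thumb, index, wrist: (n_samples, 3) arrays of marker positions.
    # Grasp aperture is the thumb-index distance at each sample.
    aperture = np.linalg.norm(thumb - index, axis=1)

    # Reach progress: cumulative wrist path length, normalized to [0, 1].
    steps = np.linalg.norm(np.diff(wrist, axis=0), axis=1)
    progress = np.concatenate(([0.0], np.cumsum(steps)))
    progress /= progress[-1]

    # Peak aperture and the fraction of the reach completed at that peak
    # (typically near 2/3 of the reach in intact reach-to-grasp).
    peak = int(np.argmax(aperture))
    return aperture[peak], progress[peak]
\end{verbatim}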
Reach-to-grasp is an everyday task performed without conscious thought by most people, but the demands of sensory integration and motor control during the task mean that changes in sensory availability can result in quantifiable behavioral changes as well as changes in neural activity. Similarly, changes in perceived sensory availability and end-effector familiarity when watching reach-to-grasp actions performed by another person, with or without a prosthetic device, can reveal sensory contributions to action recognition.
Study 1
Question 1
How does the end-effector affect neural processing of observed motor tasks?
Aim 1
Determine the contributions of end-effector and target object properties to recognition of attributes of motor actions as performed by an intact hand or prosthetic limb.
Approach Rationale
Repeated presentation of a stimulus leads to repetition suppression in areas relevant to that stimulus. By presenting repeated and non-repeated stimuli, the differences in neural activation between the stimulus conditions may be determined.
In this study, videos of an intact hand and of a prosthesis are presented in repeated and non-repeated conditions, and sustained neural activation is compared. Significant effects in the second-level analysis will highlight areas whose activation differs between the intact hand and the prosthesis and which therefore process the two end-effectors differently. Areas relevant to the sensory differences between the two end-effectors are anticipated to include the superior parietal lobule, an area of sensory integration. TODO1
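As a sketch of the contrast logic only (the notation is introduced here for illustration and is not taken from the analysis plan), the repetition suppression effect for each end-effector, and its difference between end-effectors, can be written as
\begin{equation}
\mathrm{RS}_{e} = \beta_{e,\mathrm{alternating}} - \beta_{e,\mathrm{repeated}}, \qquad
\Delta\mathrm{RS} = \mathrm{RS}_{\mathrm{prosthesis}} - \mathrm{RS}_{\mathrm{intact}},
\end{equation}
where $\beta_{e,\mathrm{alternating}}$ and $\beta_{e,\mathrm{repeated}}$ are first-level estimates of sustained activation for alternating and repeated stimulus sets with end-effector $e$; regions where $\Delta\mathrm{RS}$ is significant at the second level are those that process the two end-effectors differently.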
We also hypothesize diminished or absent activation in grasp-specific cortical regions \citep{Hamilton2007} during observation of the prosthesis relative to the intact hand.
This difference could be due to the absence of an internal model for the prosthetic device, to which the participants are naïve.
Based on previous research describing cortical areas where object properties are represented \citep{Gallivan2014}, we hypothesize differences in sustained neural activity based on the sensory properties of target objects in the videos. We further hypothesize that these activation differences will be diminished when the end-effector shown in the videos is the prosthetic device, relative to activation during observation of videos using an intact hand \citep{Gazzola_2008,Buccino_2001a,Kilner_2011,Friston_2011}.
Hypothesis 1.1
Observation of a prosthesis executing a motor task will result in sustained bilateral neural activity in parietofrontal networks relative to observing an intact hand performing the same actions.
Hypothesis 1.2
The effect of target object sensory properties on sensory neural areas will be modulated by the end-effector used in an observed task.
Methods
Subjects
Fifteen (10 female) right-handed, neurologically healthy adults participated in the study. All subjects provided written informed consent, and all methods were approved by the Georgia Institute of Technology Institutional Review Board. Subjects read a list of MRI contraindications and confirmed that none applied, completed an MRI screening questionnaire indicating `no' to all pathologies, and completed a health questionnaire assessing handedness, eyesight, language/education, general health, and demographic details (age, gender, date of birth).
Experimental Measures
Functional Magnetic Resonance Imaging (fMRI)
Participants will lie supine in a Trio 3-Tesla MRI machine (Siemens Medical Solutions USA, Inc., Malvern, PA, USA) fitted with a 12-channel head coil. Participants will view text and video stimuli projected using a Silent Vision 6011 projector (Avotec, Inc., Stuart, FL, USA). Stimuli will be generated with PsychoPy 2 software (University of Nottingham, University Park, Nottingham, UK).
MRI parameters: 37 slices of \SI{3.0}{\mm} thickness with a \SI{0.3}{\mm} gap, \SI{90}{\degree} flip angle, TR=\SI{2}{\s}, TE=\SI{30}{\ms}, FOV=$204\times204\times204$\,mm, $68\times68$ matrix, 238 measurements per run; four runs for a total of 952 volumes.
Baseline reference for BOLD measurement was the last \SI{4}{\s} of the fixation period following the video presentation.
Experimental Design
A repetition suppression (RS) paradigm was created based on previous work by \citet{Thioux2015}. Stimuli were presented in four \SI{7}{\minute} runs, with each run consisting of the presentation of 20 stimulus sets. Each set was composed of a fixation cross (\SIrange{8}{12}{\s}), a ready prompt (\SI{2}{\s}), a task prompt (\SI{3}{\s}), and four videos of \SI{2}{\s} each (see \cref{fig:aim2stimuli}). Task prompt text was relevant to the object and task about to be presented; for example, ``Clean the plate'' was displayed before a video of reaching for a plate. To assess the effects of repetition suppression on brain areas, stimulus sets consisted of either the same video repeated four times or two different videos each presented twice. Alternating videos had either a different end-effector (e.g., intact hand and prosthesis) or a different object with similar semantic meaning (e.g., paper plate and stoneware plate). The various stimulus sets were grouped into contrasts for statistical analysis of BOLD and RS effects. The contrasts were: intact same object, intact alternating object, prosthesis same object, prosthesis alternating object, and alternating intact/prosthesis with the same object.
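A minimal PsychoPy sketch of one stimulus set's timing is given below for illustration only; the window settings, stimulus text, video file name, and jitter draw are assumptions rather than the actual experiment scripts.
\begin{verbatim}
from psychopy import core, visual
import random

win = visual.Window(fullscr=True, color='black')
fixation = visual.TextStim(win, text='+')
ready = visual.TextStim(win, text='Ready')
prompt = visual.TextStim(win, text='Clean the plate')

def show_static(stim, duration):
    # Draw a static stimulus once and hold it on screen.
    stim.draw()
    win.flip()
    core.wait(duration)

def play_movie(filename, duration):
    # Play a short video, redrawing it on every frame for its duration.
    movie = visual.MovieStim3(win, filename)
    clock = core.Clock()
    while clock.getTime() < duration:
        movie.draw()
        win.flip()

# One stimulus set: jittered fixation, ready cue, task prompt, four 2-s videos.
show_static(fixation, random.uniform(8, 12))
show_static(ready, 2.0)
show_static(prompt, 3.0)
for video in ['intact_plate.mp4'] * 4:   # same-video (repeated) condition
    play_movie(video, 2.0)

win.close()
\end{verbatim}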