Visual encoding of tool-object interactions
Gale, Mary Kate
Tools and objects are associated with numerous action possibilities that are narrowed depending on the task-related internal and external constraints presented to the observer. Action hierarchies propose that goals occupy higher levels of the hierarchy while kinematic patterns occupy lower levels. Prior work suggests that tool-object perception is heavily influenced by grasp and action context. The current study sought to evaluate whether an action hierarchy can be perceptually identified using eye tracking during tool-object observation. We hypothesized that gaze patterns would reveal a perceptual hierarchy based on the observed task context and grasp constraints. Participants viewed tool-object scenes with two types of constraints: task-context constraints and grasp constraints. Task-context constraints consisted of correct (e.g., frying pan-spatula) and incorrect (e.g., stapler-spatula) tool-object pairings. Grasp constraints involved modified tool orientations, which required participants to understand how initially awkward grasp postures can help achieve the task. Each visual scene contained three areas of interest (AOIs): the object, the functional tool-end (e.g., spoon handle), and the manipulative tool-end (e.g., spoon bowl). Results revealed two distinct processes depending on stimulus constraints. Goal-oriented encoding, an attentional bias toward the object and manipulative tool-end, was demonstrated when grasp did not lead to meaningful tool-use. In images where grasp postures were critical to action performance, attentional bias lay primarily between the object and functional tool-end, suggesting means-related encoding of the tool's graspable properties. This study expands on previous work and demonstrates a flexible constraint hierarchy that depends on the observed task constraints.