Humans are very good at remembering large numbers of scenes over substantial periods of time. How good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At Test, after two weeks, observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change from the study period. Scene recognition memory was found to be similar in all experiments, regardless of study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a ‘depth of processing’ account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection during the Test phase was faster than during the Study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
Research shows that object-location binding errors can occur in visual working memory (VWM), indicating a failure to store bound representations rather than mere forgetting (Bays et al., 2009; Pertzov et al., 2012). Here we investigated how categorical similarity between real-world objects influences the probability of object-location binding errors. Our observers memorized three objects (image set: Konkle et al., 2010) presented for 3 seconds and located around an invisible circumference. After a 1-second delay they had to (1) place one of those objects on the circumference at its original position (localization task) or (2) recognize an old object when it was paired with a new object (recognition task). On each trial, the three encoded objects could be drawn from the same category or from different categories, providing two levels of categorical similarity. For the localization task, we used the mixture model (Zhang & Luck, 2008) with a swap component (Bays et al., 2009) to estimate the probabilities of correct and swapped object-location conjunctions, the precision of localization, and the guess rate (the probability that a location was forgotten). We found that categorical similarity had no effect on localization precision or guess rate. However, observers made more swaps when the encoded objects had been drawn from the same category. Importantly, there were no correlations between the probabilities of these binding errors and the probabilities of false recognition in the recognition task, which suggests that the binding errors cannot be explained solely by poor memory for the objects themselves. Rather, remembering objects and binding them to locations appear to be partially distinct processes. We suggest that categorical similarity impairs the ability to store objects bound to their locations in VWM.
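The swap-model analysis referred to above can be sketched as follows. This is a minimal illustration, not the study's actual analysis pipeline: localization errors on the circle are modeled as a mixture of a von Mises distribution centered on the target (Zhang & Luck, 2008), von Mises distributions centered on the nontargets (the swap component of Bays et al., 2009), and a uniform guess distribution. All numbers in the synthetic demonstration are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def swap_model_nll(params, errors, nontarget_offsets):
    """Negative log-likelihood of the swap mixture model.
    params            -- (p_target, p_swap, kappa); guess rate = 1 - p_target - p_swap
    errors            -- response minus target location, in radians, one per trial
    nontarget_offsets -- (n_trials, n_nontargets) nontarget-minus-target locations
    """
    p_t, p_s, kappa = params
    p_g = 1.0 - p_t - p_s
    if min(p_t, p_s, p_g) < 0 or kappa <= 0:
        return np.inf                      # reject invalid parameter values
    target_like = vonmises.pdf(errors, kappa)
    # A swap response is centered on one of the nontarget locations.
    swap_like = vonmises.pdf(errors[:, None] - nontarget_offsets, kappa).mean(axis=1)
    guess_like = 1.0 / (2.0 * np.pi)
    mixture = p_t * target_like + p_s * swap_like + p_g * guess_like
    return -np.sum(np.log(mixture + 1e-12))

def fit_swap_model(errors, nontarget_offsets, x0=(0.6, 0.2, 5.0)):
    """Maximum-likelihood fit; returns (p_target, p_swap, kappa)."""
    res = minimize(swap_model_nll, x0, args=(errors, nontarget_offsets),
                   method="Nelder-Mead")
    return res.x

# Synthetic demonstration: 70% target responses, 20% swaps, 10% guesses.
rng = np.random.default_rng(0)
n = 2000
offsets = rng.uniform(-np.pi, np.pi, size=(n, 2))   # two nontargets per trial
component = rng.choice(3, size=n, p=[0.7, 0.2, 0.1])
errors = rng.vonmises(0.0, 8.0, size=n)             # target responses
swap_to = offsets[np.arange(n), rng.integers(0, 2, size=n)]
errors = np.where(component == 1, swap_to + rng.vonmises(0.0, 8.0, size=n), errors)
errors = np.where(component == 2, rng.uniform(-np.pi, np.pi, size=n), errors)
errors = np.angle(np.exp(1j * errors))              # wrap to (-pi, pi]

p_target, p_swap, kappa = fit_swap_model(errors, offsets)
```

An increase in the fitted swap probability for same-category sets, with precision (kappa) and guess rate unchanged, is the pattern of results described above.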
Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes’ images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (1988) reported that rivalry could guide attention only weakly but that luster (shininess) “popped out”, producing very shallow reaction time (RT) × set size functions. In the present study, we revisited the topic with new and improved stimuli. Using a checkerboard pattern in the rivalry experiments, we found that search for rivalry can be more efficient (16 msec/item) than search for a standard rivalrous grating (30 msec/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency improved substantially when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but that these effects are relatively weak and can be hidden by other features, such as luminance and orientation, in visual search tasks.
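Search efficiency figures such as the 16 and 30 msec/item values above are slopes of the RT × set size function, obtained by regressing mean RT on display set size. A minimal sketch; the set sizes and RT values below are illustrative, not the study's data:

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Least-squares slope (ms/item) and intercept (ms) of the
    RT x set size function."""
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return slope, intercept

# Hypothetical mean correct RTs (ms) at four set sizes, constructed to
# match the ~16 ms/item efficiency of the checkerboard-rivalry search.
set_sizes = np.array([4, 8, 12, 16])
rts = np.array([620, 684, 748, 812])
slope, intercept = search_slope(set_sizes, rts)
```

Slopes near zero indicate "pop-out" (efficient, parallel search); slopes of tens of ms/item indicate inefficient search, which is why the shallow functions reported by Wolfe and Franzel (1988) were taken as evidence of guidance by luster.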
The problem of consciousness is one of the core problems of contemporary cognitive science. Driven by the neuroimaging boom, most researchers look for the neural correlates or signatures of consciousness and awareness in the human brain. However, we believe that the explanatory potential of the cultural-historical activity approach to this problem is far from exhausted. We propose the Cognitive Psychology of Activity research program, or activity theory-based constructivism, as an attempt to account for multiple phenomena of human awareness and attention. This approach relies upon cultural-historical psychology and the concept of mediation by Lev S. Vygotsky; activity theory and the concept of image generation by Alexey N. Leontiev; the physiology of activity and the metaphor of movement construction by Nikolai A. Bernstein, transferred to the psychology of perception as image construction by a number of Russian researchers in the 1960s; and the understanding of attention as action by evolutionary cognitive psychologists of the 1980s. The central concept of our approach is the concept of task, defined by Leontiev as “a goal assigned in specific circumstances”. The goal determines the choice and use of available cultural means (“mediators”) consistent with the circumstances or conditions of task performance, which in turn provide for the construction of processing units allowing for more successful (“attentive”) performance and for the awareness of visual stimuli that could otherwise be missed or ignored. Perceptual task accomplishment is controlled at several heterarchically organized levels, with possible strategic reorganizations of this system demonstrating the constructive nature of human cognition.
The study was designed to examine possibly new aspects of creative activity in virtual environments. The online gaming interface Minecraft was used to construct (on computer screens) complex structures, such as buildings, from ready-made blocks. Two modes were used: individual and dyadic. Participants (N=49; 29 males and 20 females; 18 to 29 y.o.; recruited on a snowball basis) were required to build, remotely, two complex structures: a ship and a house; each structure was required to be highly creative, i.e. unusual and never seen before. Creativity was estimated not by the final structure but by the number of ideas generated by the participants and produced either in practice or verbally. Each participant took part once in an individual and once in a dyadic session; the partners were selected randomly. The participants' verbal exchanges were conducted via Skype; operations with the Minecraft interface were recorded using the FastStone Capture software package. All ideas produced by the participants were classified according to the following criteria: type (conceptual, functional, selective, corrective, or intentional), the level of the structure to which the idea referred (the whole structure, a particular component of the structure, or an element of the structure), and the status of verbally produced ideas (implemented or unimplemented). The results show that participants produced significantly more ideas and took significantly less time to build the prescribed structure (a house or a ship) in the individual session than in the dyadic session. Analysis of the implementation of ideas shows that, in the dyadic sessions, participants produced significantly fewer ideas that were subsequently implemented. For the most part, they dropped and left unimplemented the ideas referring to the component or element levels of the structure.
The results also show that intentions were the only type of idea which, while generated equally often in the individual and dyadic sessions, was more often left unimplemented in the dyadic sessions than in the individual ones.
This research tests the hypothesis that 3- and 4-year-olds can use characteristics of a social context created by adults to learn new words. One strategy a child can use in multi-party conversations is to decide to whom a message (and a new word) is addressed. This ability may simplify word learning situations by making learning selective and by reducing the number of perceived words. In the current experiment, we tested children's ability to learn a new word from a natural conversation when the communicative context was kept constant and when it was altered by adding a new game partner. We predicted that children would differentially interpret verbal messages containing a new word as addressed to them or to the new person, and that this would affect their ability to remember the new word. Children heard a new word in one of two conditions: when a communicative context shared with an adult was kept constant, or when it changed (a new adult joined the conversation). We found that 3-year-olds could learn new words only when the communicative context was constant, whereas 4-year-olds could learn new words in both conditions. A control condition revealed that these findings cannot be explained by task difficulty.
The word superiority effect (Cattell, 1886) has been discussed in psychology for more than a century. However, the question remains whether automatic word processing is possible without spatial segregation of the words. Our previous studies of letter search in large letter arrays containing spatially unsegregated words revealed no difference in performance or eye movements when observers searched for letters always embedded in words, letters never embedded in words, or letters in arrays containing no words (Falikman, 2014; Falikman & Yazykov, 2015). Yet both the percentage of participants who noticed words during letter search and their subjective reports of whether words made search easier or harder differed significantly between target letters within words and target letters outside words. In the current study, we used the Process Dissociation Procedure (Jacoby, 1991) to investigate whether words are processed implicitly when observers search for letters. Two groups of participants, 40 subjects each, performed a 1-minute search for 24 target letters (either Ts, always within words, or Hs, always outside words) in the same letter array of 10 pseudorandom letter strings, 60 letters each, containing 24 Russian mid-frequency nouns. Afterwards, they filled in two identical word-stem completion forms, each containing the same 48 word beginnings (24 of them beginnings of words included in the array). First, the participants were instructed to use words that could have appeared in the search array ("inclusion test"), and then to avoid using such words ("exclusion test"). Comparison of conscious and unconscious processing probabilities revealed no difference between them (with the former not exceeding 0.09 and the latter not exceeding 0.11), no difference between the two conditions, and no interaction between the factors.
This allows us to conclude that, despite the subjective reports, words embedded in random letter strings are mostly processed neither explicitly nor implicitly during letter search, and that automatic unitization requires spatial segregation.
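The conscious (C) and unconscious (U) processing probabilities compared above follow from Jacoby's (1991) process-dissociation equations: P(inclusion) = C + U(1 - C) and P(exclusion) = U(1 - C), so C = P(inclusion) - P(exclusion) and U = P(exclusion) / (1 - C). A minimal sketch; the stem-completion rates below are illustrative values chosen to land near the reported ceilings of 0.09 and 0.11, not the study's data:

```python
def pdp_estimates(p_inclusion, p_exclusion):
    """Process-dissociation estimates (Jacoby, 1991).

    Inclusion completions reflect either conscious or unconscious influence:
        P(inclusion) = C + U * (1 - C)
    Exclusion completions reflect unconscious influence that escaped
    conscious control:
        P(exclusion) = U * (1 - C)
    Solving the pair gives the two estimates below.
    """
    c = p_inclusion - p_exclusion                      # conscious component
    u = p_exclusion / (1.0 - c) if c < 1.0 else float("nan")
    return c, u

# Illustrative completion rates for stems of words from the search array.
c, u = pdp_estimates(p_inclusion=0.19, p_exclusion=0.10)
```

Equal inclusion and exclusion completion rates for array words would drive C toward zero, which is the pattern behind the null result reported above.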
It was previously shown that the features of individual items retrieved from visual working memory (VWM) are systematically biased towards the mean feature of the sample set (Brady & Alvarez, 2011), suggesting hierarchical encoding in VWM. In this work, we investigated how hierarchical representations are stored over time. Observers were shown four differently oriented triangles for 200 ms and, after a 1-, 4-, or 7-second delay, had to report either one individual orientation or the average orientation of all triangles by rotating a probe circle. Before set presentation, observers were informed whether they had to remember one particular orientation, all four individual orientations, or the average orientation. Using the mixture model (Zhang & Luck, 2008), we estimated the probability that a tested representation was in VWM and its precision, as well as the systematic bias that would indicate hierarchical encoding. We found a strong bias towards the mean in the “remember four” condition, which provides evidence for hierarchical encoding in VWM. Our main result was the absence of significant changes over time in the retention of the elements of a hierarchical representation (the mean and the individual features). This supports the idea that hierarchical representations arise during encoding rather than during storage in VWM. Both the precision and the probability of an item being in memory decreased over time, consistent with both “sudden death” and “gradual decay” accounts of storing hierarchical representations.
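The bias towards the mean described above can be quantified by signing each trial's report error according to the direction of the set mean relative to the tested item, so that positive values indicate attraction towards the mean. A minimal sketch on synthetic circular data; the 0.1-radian pull and the noise level are invented for illustration, and this is not the study's exact measure:

```python
import numpy as np

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * np.asarray(a)))

def bias_toward_mean(responses, targets, set_means):
    """Mean report error signed toward the set mean; positive values
    indicate attraction toward the mean (cf. Brady & Alvarez, 2011)."""
    error = wrap(responses - targets)
    toward = np.sign(wrap(set_means - targets))  # +1 if mean lies clockwise of target
    return float(np.mean(error * toward))

# Synthetic observer whose reports are pulled 0.1 rad toward the set mean.
rng = np.random.default_rng(1)
n = 5000
targets = rng.uniform(-np.pi, np.pi, size=n)
set_means = wrap(targets + rng.choice([-0.6, 0.6], size=n))
pull = 0.1 * np.sign(wrap(set_means - targets))
responses = wrap(targets + pull + rng.vonmises(0.0, 20.0, size=n))
bias = bias_toward_mean(responses, targets, set_means)
```

A bias estimate reliably above zero in the "remember four" condition, but not in the "remember one" condition, is the signature of hierarchical encoding discussed above.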