Bruce
Three experiments are reported which provide evidence for the independence of effects of repetition from those of distinctiveness and semantic priming in the recognition of familiar faces. In Experiment 1, repetition priming is shown to be additive with face distinctiveness in a face familiarity decision task, where subjects make speeded familiarity decisions to a sequence of famous and unfamiliar faces. Experiment 2 examines the combined effects of distinctiveness and semantic priming. The results suggest that the effect of distinctiveness is additive with that of semantic priming. Experiment 3 uses a more powerful design in which effects of distinctiveness and semantic priming were examined while all items were repeated three times during the course of the experiment. Effects of repetition and distinctiveness were again additive, as were effects of repetition and semantic priming. Distinctiveness and semantic priming were additive at 1,000 ms SOA, though appeared to interact at 250 ms SOA. The results give further evidence for the separation of the mechanisms of semantic priming from those of repetition priming, and furthermore suggest that distinctiveness operates at a different locus from either of the priming mechanisms.
Human subjects are able to identify the sex of faces with very high accuracy. Using photographs of adults in which hair was concealed by a swimming cap, subjects performed with 96% accuracy. Previous work has identified a number of dimensions on which the faces of men and women differ. An attempt to combine these dimensions into a single function to classify male and female faces reliably is described. Photographs were taken of 91 male and 88 female faces in full face and profile. These were measured in several ways: (i) simple distances between key points in the pictures; (ii) ratios and angles formed between key points in the pictures; (iii) three-dimensional (3-D) distances derived by combination of full-face and profile photographs. Discriminant function analysis showed that the best discriminators were derived from simple distance measurements in the full face (85% accuracy with 12 variables) and 3-D distances (85% accuracy with 6 variables). Combining measures taken from the picture plane with those derived in 3-D produced a discriminator approaching human performance (94% accuracy with 16 variables). The performance of the discriminant function was compared with that of human perceivers and was found to be correlated with it, though far from perfectly. The difficulty of deriving a reliable function to distinguish between the sexes is discussed with reference to the development of automatic face-processing programs in machine vision. It is argued that such systems will need to incorporate an understanding of the stimuli if they are to be effective.
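For readers unfamiliar with the technique, the sketch below illustrates the kind of discriminant function analysis described in this abstract, using Python and scikit-learn. The measurement matrix is simulated and the variable names are illustrative only; the sample sizes and number of variables are borrowed loosely from the abstract, and none of this is the published data or analysis.

```python
# Minimal sketch of a discriminant function analysis of the kind described
# above: facial distance measurements are used to classify faces as male or
# female.  The measurements here are simulated placeholders, not the
# published data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_male, n_female, n_measures = 91, 88, 16   # e.g. picture-plane plus 3-D distances

# Simulated measurement matrices: each row is a face, each column a distance.
male = rng.normal(loc=1.0, scale=0.2, size=(n_male, n_measures))
female = rng.normal(loc=0.9, scale=0.2, size=(n_female, n_measures))

X = np.vstack([male, female])
y = np.array([1] * n_male + [0] * n_female)   # 1 = male, 0 = female

# Fit a linear discriminant function and estimate classification accuracy by
# cross-validation, analogous to reporting "% correct with k variables".
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"Cross-validated classification accuracy: {accuracy:.2%}")
```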
People are remarkably accurate (approaching ceiling) at deciding whether faces are male or female, even when cues from hair style, makeup, and facial hair are minimised. Experiments designed to explore the perceptual basis of our ability to categorise the sex of faces are reported. Subjects were considerably less accurate when asked to judge the sex of three-dimensional (3-D) representations of faces obtained by laser-scanning, compared with a condition where photographs were taken with hair concealed and eyes closed. This suggests that cues from features such as eyebrows, and skin texture, play an important role in decision-making. Performance with the laser-scanned heads remained quite high with 3/4-view faces, where the 3-D shape of the face should be easiest to see, suggesting that the 3-D structure of the face is a further source of information contributing to the classification of its sex. Performance at judging the sex from photographs (with hair concealed) was disrupted if the photographs were inverted, which implies that the superficial cues contributing to the decision are not processed in a purely 'local' way. Performance was also disrupted if the faces were shown in photographic negatives, which is consistent with the use of 3-D information, since negation probably operates by disrupting the computation of shape from shading. In 3-D, the 'average' male face differs from the 'average' female face by having a more protuberant nose/brow and more prominent chin/jaw. The effects of manipulating the shapes of the noses and chins of the laser-scanned heads were assessed and significant effects of such manipulations on the apparent masculinity or femininity of the heads were revealed. It appears that our ability to make this most basic of facial categorisations may be multiply determined by a combination of 2-D, 3-D, and textural cues and their interrelationships.
When shown the faces of familiar people, subjects are typically slower and less accurate at retrieving names than other semantic information. This finding, along with converging evidence from neuropsychological studies, has influenced most theoretical accounts of face recognition (e.g. Bruce & Young, 1986). These accounts propose that names are stored separately from semantic information, and that they may not be retrieved in the absence of other information. Here we show that it is possible to account for empirical findings without positing a separate store for names. The account is based on an implemented simulation with an interactive activation and competition architecture. We demonstrate that the fact that most names are unique leads naturally to the patterns of recall found in experimental studies.
Much early work in the psychology of face processing was hampered by a failure to think carefully about task demands. Recently our understanding of the processes involved in the recognition of familiar faces has been both encapsulated in, and guided by, functional models of the processes involved in processing and recognizing faces. The specification and predictive power of such theory has been increased with the development of an implemented model, based upon an 'interactive activation and competition' architecture. However, a major deficiency in most accounts of face processing is their failure to spell out the perceptual primitives that form the basis of our representations for faces. Possible representational schemes are discussed, and the potential role of three-dimensional representations of the face is emphasized.
An implementation of Bruce and Young's (1986) functional model of face recognition is used to examine patterns of covert face recognition previously reported in a prosopagnosic patient, PH. Although PH is unable to recognize overtly the faces of people known to him, he shows normal patterns of face processing when tested indirectly. A simple manipulation of one set of connections in the implemented model induces behaviour consistent with patterns of results from PH obtained in semantic priming and interference tasks. We compare this account with previous explanations of covert recognition and demonstrate that the implemented model provides the most natural and parsimonious account available. Two further patients are discussed who show deficits in person perception. The first (MS) is prosopagnosic but shows no covert recognition. The second (ME) is not prosopagnosic, but cannot access semantic information relating to familiar people. The model provides an account of recognition impairments which is sufficiently general also to be useful in describing these patients.
Eight experiments are reported showing that subjects can remember rather subtle aspects of the configuration of facial features to which they have earlier been exposed. Subjects saw several slightly different configurations (formed by altering the relative placement of internal features of the face) of each of ten different faces, and they were asked to rate the apparent age and masculinity-femininity of each. Afterwards, subjects were asked to select from pairs of faces the configuration which was identical to one previously rated. Subjects responded strongly to the central or "prototypical" configuration of each studied face where this was included as one member of each test pair, whether or not it had been studied (Experiments 1, 2 and 4). Subjects were also quite accurate at recognizing one of the previously encountered extremes of the series of configurations that had been rated (Experiment 3), but when unseen prototypes were paired with seen exemplars subjects' performance was at chance (Experiment 5). Prototype learning of face patterns was shown to be stronger than that for house patterns, though both classes of patterns were affected equally by inversion (Experiment 6). The final two experiments demonstrated that preferences for the prototype could be affected by instructions at study and by whether different exemplars of the same face were shown consecutively or distributed through the study series. The discussion examines the implications of these results for theories of the representation of faces and for instance-based models of memory.
The extent to which faces depicted as surfaces devoid of pigmentation and with minimal texture cues ('head models') could be matched with photographs (when unfamiliar) and identified (when familiar) was examined in three experiments. The head models were obtained by scanning the three-dimensional surface of the face with a laser, and by displaying the surface measured in this way by using standard computer-aided design techniques. Performance in all tasks was above chance but far from ceiling. Experiment 1 showed that matching of unfamiliar head models with photographs was affected by the resolution with which the surface was displayed, suggesting that subjects based their decisions, at least in part, on three-dimensional surface structure. Matching accuracy was also affected by other factors to do with the viewpoints shown in the head models and test photographs, and the type of lighting used to portray the head model. Experiment 2 provided further evidence for the importance of the nature of the illumination, and showed that the addition of a hairstyle (not that of the target face) did not facilitate matching. In Experiment 3, identification of the head models by colleagues of the people shown was compared with identification of photographs in which the hair was concealed and the eyes were closed. Head models were identified less well than these photographs, suggesting that the difficulties in their recognition are not solely due to the lack of hair. Women's heads were disproportionately difficult to recognise from the head models. The results are discussed in terms of their implications for the use of such three-dimensional head models in forensic and surgical applications.
In this paper we report five experiments that investigate the influence of prime faces upon the speed with which familiar faces are recognized and named. Previously, priming had been reported when the prime and target faces were closely associated, e.g., Prince Charles and Princess Diana (Bruce & Valentine, 1986). In Experiment 1 we show that there is a reliable effect of relatedness on a double-familiarity decision, even when the faces are only categorically related, e.g., Kirk Douglas and Clint Eastwood. It was then shown that such an effect emerges only on a double decision task (Experiments 2 and 3). Experiment 4 showed that on a primed naming task, faces preceded by a categorically related prime were responded to more quickly than those preceded by an unrelated prime, and that this effect was due to inhibition (a slowing of responses following unrelated primes) rather than facilitation following related primes. Experiment 5 replicated this effect and also showed that when associatively related primes were used, a facilitatory, and not an inhibitory, effect is found. It is argued that the facilitation of associative priming arises at an earlier locus than the inhibition of categorical priming.
In this paper we describe how the microstructure of the Bruce & Young (1986) functional model of face recognition may be explored and extended using an interactive activation implementation. A simulation of the recognition of familiar individuals is developed which accounts for a range of published findings on the effects of semantic priming, repetition priming and distinctiveness. Finally, we offer some speculative predictions made by the model, and point to an empirical programme of research which it suggests.
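As an illustration of the architecture referred to here, the following is a minimal sketch of an interactive activation and competition (IAC) update rule applied to a toy network with pools corresponding to face recognition units (FRUs), person identity nodes (PINs), and semantic information units (SIUs). The parameter values, weights, and pool sizes are illustrative assumptions, not those of the published implementation.

```python
# A minimal sketch of an interactive activation and competition (IAC) update
# rule of the general kind used in such implementations.  Pool labels follow
# the verbal description of the model; all numbers are illustrative.
import numpy as np

MAX_A, MIN_A, REST, DECAY, STEP = 1.0, -0.2, -0.1, 0.1, 0.1

def iac_step(act, weights, external):
    """One update cycle: act and external are vectors over all units;
    weights is a signed matrix (excitatory between pools, inhibitory within)."""
    pos_act = np.clip(act, 0.0, None)          # only active units transmit
    net = weights @ pos_act + external
    delta = np.where(
        net > 0,
        (MAX_A - act) * net,                   # excitation drives units towards MAX_A
        (act - MIN_A) * net,                   # inhibition drives units towards MIN_A
    ) - DECAY * (act - REST)                   # decay back towards the resting level
    return np.clip(act + STEP * delta, MIN_A, MAX_A)

# Toy network: units 0-1 are FRUs, 2-3 are PINs, 4-5 are SIUs for two known people.
n = 6
W = np.zeros((n, n))
W[0, 2] = W[2, 0] = W[1, 3] = W[3, 1] = 1.0    # FRU <-> PIN links (excitatory)
W[2, 4] = W[4, 2] = W[3, 5] = W[5, 3] = 1.0    # PIN <-> SIU links (excitatory)
W[2, 3] = W[3, 2] = -1.0                       # within-pool competition between PINs

act = np.full(n, REST)
ext = np.zeros(n)
ext[0] = 0.4                                   # "present" face 1 to its FRU
for _ in range(50):
    act = iac_step(act, W, ext)
print("PIN activations:", act[2], act[3])      # the PIN for face 1 wins the competition
```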
Three experiments are reported in which tip-of-the-tongue states (TOTs) were induced in subjects by reading them pieces of item-specific information. In Experiments 1 and 2, subjects attempted to name famous people. These experiments showed that, in a TOT state, seeing a picture of the face of the target person did not facilitate naming, whereas the initials of the person's name did. In Experiment 3, a similar result was obtained with a landmark-naming task. The results of the experiments are discussed with reference to current models of memory structure and name retrieval.
Mark and Todd (1983) reported an experiment in which the cardioidal strain transformation was extended to three dimensions and applied to a three-dimensional (3-D) representation of the head of a 15-year-old girl in a direction that made the transformed head appear younger to the vast majority of their subjects. The experiments reported here extend this research in order to examine whether subjects are indeed detecting cardioidal strain in three dimensions, rather than detecting changes in head slant or making 2-D comparisons of the shape of the occluding contour. Three-dimensional surfaces were obtained by measuring a real head manually (Experiment 1) and with a laser scanner (Experiment 2), and transformed to different age levels using the 3-D strain transformation described by Mark and Todd (1983). There were no statistically significant differences in the accuracy with which relative age judgments could be made in response to pairs of profiles, pairs of 3/4 views, or pairs of mixed views (profile plus 3/4 view), suggesting that subjects can indeed extract the cardioidal strain level of the head in three dimensions. However, an additional effect that emerged in these studies was that judgments were crucially affected by the instructions given to subjects, which suggests that factors other than cardioidal strain are important in making judgments about rich data structures.
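For concreteness, the sketch below applies the three-dimensional cardioidal strain transformation in the form in which it is usually stated: in spherical coordinates about an origin inside the head, the radial distance is scaled by 1 + k(1 - cos phi), where phi is the angle from the vertical axis, and the angles are left unchanged. The choice of origin, the example points, and the value of the strain parameter k are illustrative assumptions, not those used by Mark and Todd (1983).

```python
# A sketch of the 3-D cardioidal strain transformation discussed above.
# All numerical values here are illustrative.
import numpy as np

def cardioidal_strain_3d(points, k, origin=(0.0, 0.0, 0.0)):
    """Apply 3-D cardioidal strain to an (N, 3) array of head-surface points.
    Positive k simulates growth (an older-looking shape); negative k the reverse."""
    o = np.asarray(origin, dtype=float)
    p = np.asarray(points, dtype=float) - o
    r = np.linalg.norm(p, axis=1)                 # radial distance from the origin
    cos_phi = p[:, 2] / np.maximum(r, 1e-12)      # cosine of the angle from the vertical (z) axis
    scale = 1.0 + k * (1.0 - cos_phi)             # cardioidal strain factor
    return p * scale[:, None] + o

# Example: strain a few surface points towards an "older" head shape (k > 0).
pts = np.array([[0.0, 0.0, 10.0],    # top of the head: unchanged (cos phi = 1)
                [8.0, 0.0, -4.0],    # lower face: pushed outwards
                [0.0, 6.0, -8.0]])   # jaw region: pushed outwards the most
print(cardioidal_strain_3d(pts, k=0.15))
```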
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985) and with objects (Warren & Morton, 1982) are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from that of the target. The theoretical implications of these results are discussed.