• Abu-Mostafa, Y. S. & Psaltis, D. (1987). Optical neural computers. Scientific American, 256(3), 88-95.

  • Aggarwal, J.K. & Nandhakumar, N. (1988). On the computation of motion from sequences of images - A review. Proc. of IEEE, 76(8), 917-935.

  • Agin, G.J. & Binford, T.O. (1973). Computer analysis of curved objects. Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI), 629-640.

  • Aizawa, K., Harashima, H., & Saito, T. (1989). Model-based analysis synthesis image coding (MBASIC) system for a person's face. Signal Processing: Image Communication, 1, 139-152.

  • Anderson, D. Z. (1986). Coherent optical eigenstate memory. Opt. Lett., 11(1), 56-58.

  • Andreou, A., et al. (1991). Current mode subthreshold circuits for analog VLSI neural systems. IEEE Transactions on Neural Networks, 2(2), 205-213.

  • Azarbayejani, A., Starner, T., Horowitz, B., & Pentland, A. P. (1992). Visual head tracking for interactive graphics. IEEE Trans. Pattern Analysis and Machine Intelligence, special issue on computer graphics and computer vision, in press.

  • Barnard, S. T. & Thompson, W. B. (1980). Disparity analysis of images. IEEE Trans. PAMI, PAMI-2(4), 333-340.

  • Barrow, H.G. & Popplestone, R.J. (1971). Relational descriptions in picture processing. In B. Meltzer & D. Michie (Eds.), Machine Intelligence, 6. Edinburgh Univ. Press, Edinburgh.

  • Baylis, G. C., Rolls, E. T., & Leonard, C. M. (1985). Selectivity between faces in the responses of a population of neurons in the cortex in the superior temporal sulcus of the monkey. Brain Research, 342, 91-102.

  • Bichsel, M. & Pentland, A. (1992). Topological matching for human face recognition. M.I.T. Media Laboratory Vision and Modeling Group Technical Report No. 186, June.

  • Bledsoe, W. W. (1966). The model method in facial recognition. Panoramic Research Inc., Palo Alto, CA, PRI:15, Aug.

  • Borod, J. C., St. Clair, J., Koff, E., & Alpert, M. (1990). Perceiver and poser asymmetries in processing facial emotion. Brain & Cognition, 13(2), 167-177.

  • Brooke, N. M. & Petajan, E. D. (1986). Seeing speech: Investigations into the synthesis and recognition of visible speech movements using automatic image processing and computer graphics. Proceedings of the International Conference on Speech Input/Output: Techniques and Applications, London, pp. 104-109.

  • Brooke, N. M. (1989). Visible speech signals: Investigating their analysis, synthesis, and perception. In M. M. Taylor, F. Neel, & D. G. Bouwhuis (Eds.), The Structure of Multimodal Dialogue. Holland: Elsevier Science Publishers.

  • Brunelli, R. (1990). Edge projections for facial feature extraction. Technical Report 9009-12, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy.

  • Brunelli, R. (1991). Face recognition: Dynamic programming for the detection of face outline. Technical Report 9104-06, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy.

  • Brunelli, R. & Poggio, T. (1991). HyperBF networks for gender recognition. Proceedings Image Understanding Workshop 1991. San Mateo, CA: Morgan Kaufmann.

  • Buck, R. & Duffy, R. (1980). Nonverbal communication of affect in brain-damaged patients. Cortex, 16, 351-362.

  • Buhmann, J., Lange, J., & von der Malsburg, C. (1989). Distortion invariant object recognition by matching hierarchically labeled graphs. IJCNN International Conference on Neural Networks (Vol. I, pp. 155-159). Washington, DC.

  • Buhmann, J., Lange, J., von der Malsburg, C., Vorbruggen, J. C., & Wurtz, R. P. (1991). Object recognition with Gabor functions in the Dynamic Link Architecture: Parallel implementation on a transputer network. In B. Kosko (Ed.), Neural Networks for Signal Processing (pp. 121-159). Englewood Cliffs, NJ: Prentice Hall.

  • Burt, P. (1988a). Algorithms and architectures for smart sensing. Proceedings of the Image Understanding Workshop, April. San Mateo, CA: Morgan Kaufmann.

  • Burt, P. J. (1988b). Smart sensing within a pyramid vision machine, Proceedings of the IEEE, 76(8), 1006-1015.

  • Cacioppo, J. T. & Dorfman, D. D. (1987). Waveform moment analysis in psychophysiological research. Psychological Bulletin, 102, 421-438.

  • Cacioppo, J. T. & Petty, R. (1981). Electromyographic specificity during covert information processing. Psychophysiology, 18(2), 518-523.

  • Cacioppo, J. T., Petty, R. E., & Morris, K. J. (1985). Semantic, evaluative, and self-referent processing: Memory, cognitive effort, and somatovisceral activity. Psychophysiology, 22, 371-384.

  • Cacioppo, J. T., Tassinary, L. G., & Fridlund, A. F. (1990). The skeletomotor system. In J. T. Cacioppo and L. G. Tassinary (Eds.), Principles of psychophysiology: Physical, social, and inferential elements (pp. 325-384). New York: Cambridge University Press.

  • Camras, L. A. (1977). Facial expressions used by children in a conflict situation. Child Development, 48, 1431-35.

  • Cannon, S. R., Jones, G. W., Campbell, R., & Morgan, N. W. (1986). A computer vision system for identification of individuals. Proceedings of IECON (pp. 347-351).

  • Carey, S. & Diamond, R. (1977). From piecemeal to configurational representation of faces. Science, 195, 312-313.

  • Chen, H. H. & Huang, T. S. (1988). A survey of construction and manipulation of octrees. Computer Vision, Graphics, and Image Processing (CVGIP), 43, 409-431.

  • Chernoff, H. (1971). The use of faces to represent points in N-dimensional space graphically. Office of Naval Research, December, Project NR-042-993.

  • Chernoff, H. (1973). The use of faces to represent points in K-dimensional space graphically. Journal of the American Statistical Association, 361.

  • Chesney, M. A., Ekman, P., Friesen, W. V., Black, G. W., & Hecker, M. H. L. (1990). Type A behavior pattern: Facial behavior and speech components. Psychosomatic Medicine, 53, 307-319.

  • Choi, C. S., Harashima, H., & Takebe, T. (1990). 3-Dimensional facial model-based description and synthesis of facial expressions (in Japanese). Trans. IEICE of Japan, J73-A(7), July, pp. 1270-1280.

  • Choi, C. S., Harashima, H., & Takebe, T. (1991). Analysis and synthesis of facial expressions in knowledge-based coding of facial image sequences. International Conference on Acoustics, Speech, and Signal Processing (pp. 2737-2740). New York: IEEE.

  • Churchland, P. S. & Sejnowski, T. J. (1992). The computational brain. Cambridge, MA: MIT Press.

  • Cotter, L. K., Drabik, T. J., Dillon, R. J., & Handschy, M. A. (1990). Ferroelectric-liquid-crystal silicon-integrated-circuit spatial light modulator. Opt. Lett., 15(5), 291.

  • Cottrell, G. W. & Fleming, M. K. (1990). Face recognition using unsupervised feature extraction. In Proceedings of the International Neural Network Conference (pp. 322-325).

  • Cottrell, G. W. & Metcalfe, J. (1991). EMPATH: Face, gender and emotion recognition using holons. In R. P. Lippman, J. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems 3 (pp. 564-571). San Mateo, CA: Morgan Kaufmann.

  • Craw, I., Ellis, H., & Lishman, J. R. (1987). Automatic extraction of face features. Pattern Recognition Letters, 5, 183-187.

  • Cyberware Laboratory Inc. (1990). 4020/RGB 3D scanner with color digitizer. Monterey, CA.

  • Damasio, A., Damasio, H., & Van Hoesen, G. W. (1982). Prosopagnosia: anatomic basis and behavioral mechanisms. Neurology, 32, 331-341.

  • Darwin, C. (1872). The expression of the emotions in man and animals. New York: Philosophical Library.

  • Davidson, R. J., Ekman, P., Saron, C., Senulis, J., & Friesen, W. V. (1990). Emotional expression and brain physiology I: Approach/withdrawal and cerebral asymmetry. Journal of Personality and Social Psychology, 58, 330-341.

  • Desimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3, 1-24.

  • Drabik, T. J. & Handschy, M. A. (1990). Silicon VLSI ferroelectric liquid-crystal technology for micropower optoelectronic computing devices. Applied Optics, 29(35), 5220.

  • Duda, R. O. & Hart, P. E. (1973). Pattern classification and scene analysis. John Wiley.

  • Eggert, D. & Bowyer, K. (1989). Computing the orthographic projection aspect graph of solids of revolution. Proc. IEEE Workshop on Interpretation of 3D Scenes, (pp. 102-108). Austin, TX.

  • Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. Cole (Ed.), Nebraska Symposium on Motivation 1971, (Vol. 19, pp. 207-283). Lincoln, NE: University of Nebraska Press.

  • Ekman, P. (1978). Facial signs: Facts, fantasies, and possibilities. In T. Sebeok (Ed.), Sight, Sound and Sense. Bloomington: Indiana University Press.

  • Ekman, P. (1979). About brows: Emotional and conversational signals. In J. Aschoff, M. von Cranach, K. Foppa, W. Lepenies, & D. Ploog (Eds.), Human ethology (pp. 169-202). Cambridge: Cambridge University Press.

  • Ekman, P. (1982). Methods for measuring facial action. In K. R. Scherer and P. Ekman (Eds.), Handbook of methods in nonverbal behavior research (pp. 45-90). Cambridge: Cambridge University Press.

  • Ekman, P. (1984). Expression and the nature of emotion. In K. Scherer and P. Ekman (Eds.), Approaches to emotion (pp. 319-343). Hillsdale, N.J.: Lawrence Erlbaum.

  • Ekman, P. (1989). The argument and evidence about universals in facial expressions of emotion. In H. Wagner & A. Manstead (Eds.), Handbook of social psychophysiology (pp. 143-164). Chichester: Wiley.

  • Ekman, P. (1992a). Facial expression of emotion: New findings, new questions. Psychological Science, 3, 34-38.

  • Ekman, P. (1992b). An argument for basic emotions. Cognition and Emotion, 6, 169-200.

  • Ekman, P. & Davidson, R. J. (1992). Voluntary smiling changes regional brain activity. Ms. under review.

  • Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). Duchenne's smile: Emotional expression and brain physiology II. Journal of Personality and Social Psychology, 58, 342-353.

  • Ekman, P. & Fridlund, A. J. (1987). Assessment of facial behavior in affective disorders. In J. D. Maser (Ed.), Depression and Expressive Behavior (pp. 37-56). Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Ekman, P. & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1, 49-98.

  • Ekman, P. & Friesen, W. V. (1975). Unmasking the face. A guide to recognizing emotions from facial clues. Englewood Cliffs, New Jersey: Prentice-Hall.

  • Ekman, P. & Friesen, W. V. (1978). Facial action coding system: A technique for the measurement of facial movement. Palo Alto, Calif.: Consulting Psychologists Press.

  • Ekman, P. & Friesen, W. V. (1986). A new pan-cultural facial expression of emotion. Motivation and Emotion, 10(2).

  • Ekman, P., Friesen, W. V., & Ellsworth, P. (1972). Emotion in the human face: Guidelines for research and an integration of findings. New York: Pergamon Press.

  • Ekman, P., Friesen, W. V., & O'Sullivan, M. (1988). Smiles when lying. Journal of Personality and Social Psychology, 54, 414-420.

  • Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221, 1208-1210.

  • Ekman, P. & O'Sullivan, M. (1988). The role of context in interpreting facial expression: Comment on Russell and Fehr (1987). Journal of Experimental Psychology, 117, 86-88.

  • Ekman, P., O'Sullivan, M., & Matsumoto, D. (1991a). Confusions about content in the judgment of facial expression: A reply to Contempt and the Relativity Thesis. Motivation and Emotion, 15, 169-176.

  • Ekman, P., O'Sullivan, M., & Matsumoto, D. (1991b). Contradictions in the study of contempt: What's it all about? Reply to Russell. Motivation and Emotion, 15, 293-296.

  • Ekman, P. & Oster, H. (1979). Facial expressions of emotion. Annual Review of Psychology, 30, 527-554.

  • Ellgring, H. (1989). Nonverbal communication in depression. Cambridge: Cambridge University Press.

  • Farhat, N., Psaltis, D., Prata, A., & Paek, E. (1985). Optical implementation of the Hopfield model. Appl. Opt., 24(10), 1469-1475.

  • Finn, K. (1986). An investigation of visible lip information to be used in automatic speech recognition. PhD Dissertation, Georgetown University.

  • Fischler, M.A. & Elschlager, R.A. (1973). The representation and matching of pictorial structures. IEEE Trans. on Computers, C-22, 1, 67-92.

  • Fisher, C. G. (1968). Confusions among visually perceived consonants. Journal of Speech and Hearing Research, 11, 796-804.

  • Frenkel, K. A. (1991). The human genome project and informatics. Communications of the ACM, 34(11), November.

  • Fridlund, A.J. (1991). Evolution and facial action in reflex, social motive, and paralanguage. Biological Psychology, 32, 3-100.

  • Fried, L. A. (1976). Anatomy of the head, neck, face, and jaws. Philadelphia: Lea and Febiger.

  • Friedman, J. H. & Stuetzle, W. (1981). Projection pursuit regression. Journal of the American Statistical Association, 76(376), 817-823.

  • Friedman, S. M. (1970). Visual anatomy: Volume one, head and neck. New York: Harper and Row.

  • Friesen, W.V. & Ekman, P. (1987). Dictionary - Interpretation of FACS Scoring. Unpublished manuscript.

  • Garcia, O. N., Goldschen, A. J., & Petajan, E. D. (1992). Feature extraction for optical automatic speech recognition or automatic lipreading. Technical Report GWU-IIST-9232, Department of Electrical Engineering and Computer Science, George Washington University, Washington, DC.

  • Gillenson, M. L. (1974). The interactive generation of facial images on a CRT using a heuristic strategy. Computer Graphics Research Group, The Ohio State University, Columbus, Ohio 43210.

  • Godoy, J. F. & Carrobles, J. A. (1986). Biofeedback and facial paralysis: An experimental elaboration of a rehabilitation program. Clinical Biofeedback & Health: An International Journal, 9(2), 124-138.

  • Goldin-Meadow, S., Alibali, M. W., & Church, R. B. (in press). Transitions in concept acquisition: Using the hand to read the mind. Psychological Review.

  • Goldstein, A. J., Harmon, L. D., & Lesk, A. B. (1971). Identification of human faces, Proceedings of IEEE, 59, 748.

  • Golomb, B.A., Lawrence, D.T., & Sejnowski, T.J. (1991). SEXNET: A neural network identifies sex from human faces. In D.S. Touretzky & R. Lippman (Eds.), Advances in Neural Information Processing Systems, 3, San Mateo, CA: Morgan Kaufmann.

  • Govindaraju, V. (1992). A computational model for face location. Ph.D. Dissertation, The State University of New York at Buffalo.

  • Gray, M., Lawrence, D., Golomb, B., & Sejnowski, T. (1993). Perceptrons reveal the face of sex. Institute for Neural Computation Technical Report, University of California, San Diego.

  • Grimson, W. E. L. (1983). An implementation of a computational theory of visual surface interpolation. Computer Vision, Graphics, and Image Processing (CVGIP) 22 (1), 39--69.

  • Gross, C. G., Rocha-Miranda, C. E., & Bender, D. B. (1972). Visual properties of neurons in inferotemporal cortex of the macaque. Journal of Neurophysiology, 35, 96-111.

  • Hager, J. C. (1985). A comparison of units for visually measuring facial action. Behavior Research Methods, Instruments, & Computers, 17, 450-468.

  • Hager, J. C., & Ekman, P. (1985). The asymmetry of facial actions is inconsistent with models of hemispheric specialization. Psychophysiology, 22(3), 307-318.

  • Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85, 845-857.

  • Hallinan, P. W. (1991). Recognizing human eyes. SPIE Proceedings, Vol. 1570, Geometric Methods in Computer Vision, 214-226.

  • Harris, J., Koch, C., & Staats, C. (1990). Analog hardware for detecting discontinuities in early vision. International Journal of Computer Vision, 4(3), 211-223.

  • Haxby, J. V., Grady, C. L., Horwitz, B., Ungerleider, L. G., Mishkin, M., Carson, R. E., Herscovitch, P., Schapiro, M. B., & Rapoport, S. I. (1991). Dissociation of object and spatial visual processing pathways in human extrastriate cortex. Proceedings of the National Academy of Sciences of the United States of America, 88(5), 1621-1625.

  • Henneman, E. (1980). Organization of the motoneuron pool: The size principle. In V. E. Mountcastle (Ed.), Medical physiology (14th ed., Vol. 1, pp. 718-741). St. Louis: Mosby.

  • Heywood, C. A. & Cowey, A. (1992). The role of the 'face-cell' area in the discrimination and recognition of faces by monkeys. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 335(1273), 31-37; discussion 37-38.

  • Hill, D. R., Pearce, A., & Wyvill, B. (1988). Animating speech: An automated approach using speech synthesis by rules. The Visual Computer, 3, 277-289.

  • Horn, B. K. P. (1986). Robot Vision. McGraw-Hill.

  • Horn, B. K. P. & Schunck, B. G. (1981). Determining optical flow. Artificial Intelligence, 17, 185-203.

  • Horton, S. V. (1987). Reduction of disruptive mealtime behavior by facial screening: A case study of a mentally retarded girl with long-term follow-up. Behavior Modification, 11(1), 53-64.

  • Huang, T. S. (1987). Motion analysis. In S. Shapiro (Ed.), Encyclopedia of Artificial Intelligence. John Wiley.

  • Huang, T. S. & Orchard, M. T. (1992). Man-machine interaction in the 21st century: New paradigms through dynamic scene analysis and synthesis, Proc. SPIE Conf. on Visual Communications and Image Processing '92, Vol. 1818 (pp. 428-429). Nov. 18-20, Boston, MA.

  • Huang T. S., Reddy, S. C., & Aizawa, K. (1991). Human facial motion modeling, analysis, and synthesis for video compression. Proceedings of SPIE, 1605, 234-241.

  • Hubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology (London), 160, 106-154.

  • Hurwitz, T. A., Wada, J. A., Kosaka, B. D., & Strauss, E. H. (1985). Cerebral organization of affect suggested by temporal lobe seizures. Neurology, 35(9), 1335-1337.

  • Izard, C. E. (1971). The face of emotion. New York: Appleton-Century-Crofts.

  • Izard, C. E. (1977). Human emotions. New York: Academic Press.

  • Izard, C. E. (1979). The maximally discriminative facial movement coding system (MAX). Unpublished manuscript. Available from Instructional Resource Center, University of Delaware, Newark, Delaware.

  • Jaeger, J., Borod, J. C., & Peselow, E. (1986). Facial expression of positive and negative emotions in patients with unipolar depression. Journal of Affective Disorders, 11(1), 43-50.

  • Johnson, M. H. & Morton, J. (1991). Biology and cognitive development: The case of face recognition. Oxford, UK; Cambridge, Mass: Blackwell.

  • Jordan, M. I. & Rumelhart, D. E. (1992). Forward models - Supervised learning with a distal teacher. Cognitive Science, 16(3), 307-354.

  • Kanade, T. (1973). Picture processing system by computer complex and recognition of human faces. Dept. of Information Science, Kyoto University, Nov.

  • Kanade, T. (1977). Computer recognition of human faces. Basel and Stuttgart: Birkhauser Verlag.

  • Kanade, T. (1981). Recovery of the 3D shape of an object from a single view. Artificial Intelligence 17, 409-460.

  • Kass, M., Witkin, A., & Terzopoulos, D. (1987). Snakes: Active contour models. Proc. ICCV-87, June, 259-268.

  • Kaya, Y. & Kobayashi, K. (1972). A basic study of human face recognition. In A. Watanabe (Ed.), Frontier of Pattern Recognition (pp. 265).

  • Kleiser, J. (1989). A fast, efficient, accurate way to represent the human face. State of the Art in Facial Animation. ACM SIGGRAPH '89 Tutorials, 22, 37-40.

  • Koenderink, J. J. & van Doorn, A. J. (1979). The internal representation of solid shape with respect to vision. Biological Cybernetics 32, 211-216.

  • Kohonen, T., Lehtio, P., Oja, E., Kortekangas, A., & Makisara, K. (1977). Demonstration of pattern processing properties of the optimal associative mappings. Proc Intl. Conf. on Cybernetics and Society, Wash., D.C.

  • Kolb, B. & Milner, B. (1981). Performance of complex arm and facial movements after focal brain lesions. Neuropsychologia, 19(4), 491-503.

  • Komatsu, K. (1988). Human skin capable of natural shape variation. The Visual Computer, 3, 265-271.

  • Krause, R., Steimer, E., Sanger-Alt, C., & Wagner, G. (1989). Facial expression of schizophrenic patients and their interaction partners. Psychiatry, 52, 1-12.

  • Larrabee, W. (1986). A finite element model of skin deformation. Laryngoscope, 96, 399-419.

  • Lee, H. C. (1986). Method for computing the scene-illuminant chromaticity from specular highlights. Jour. Optical Society of America A (JOSA A) 3 (10), 1694-1699.

  • Lee, K. F. (1989). Automatic speech recognition: The development of the SPHINX system. Boston: Kluwer Academic Publishers.

  • Leung, M. K. & Huang, T. S. (1992). An integrated approach to 3D motion analysis and object recognition, IEEE Trans PAMI, 13(10), 1075-1084.

  • Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology, 27, 363-384.

  • Lewis, J. P. & Parke, F. I. (1987). Automatic lip-synch and speech synthesis for character animation. CHI+CG `87, Toronto, 143-147.

  • Li, H. Y., Qiao, Y., & Psaltis, D. (no date). An optical network for real time face recognition. Appl. Opt.

  • Lisberger, S. G. & Sejnowski, T. J. (1992). Computational analysis predicts the site of motor learning in the vestibulo-ocular reflex. Technical Report INC-92.1, UCSD.

  • Magnenat-Thalmann, N., Primeau, N. E., & Thalmann, D. (1988). Abstract muscle action procedures for human face animation. The Visual Computer, 3(5), 290-297.

  • Mahowald, M. & Mead, C. (1991). The silicon retina. Scientific American, 264(5), 76-82.

  • Mandal, M. K. & Palchoudhury, S. (1986). Choice of facial affect and psychopathology: A discriminatory analysis. Journal of Social Behavior & Personality, 1(2), 299-302.

  • Maniloff, E. S. & Johnson, K. M. (1990). Dynamic holographic interconnects using static holograms. Opt. Eng., 29(3), 225-229.

  • Marr, D. (1982). Vision. San Francisco: W.H. Freeman.

  • Marx, D., Zofel, C., Linden, U., Bonner, H. et al. (1986). Expression of emotion in asthmatic children and their mothers. Journal of Psychosomatic Research, 30(5), 609-616.

  • Mase, K. (1991). Recognition of facial expression from optical flow. IEICE Transactions, E74(10), 3474-3483.

  • Mase, K. & Pentland, A. (1990a). Lip reading by optical flow. Trans. IEICE of Japan, J73-D-II(6), 796-803.

  • Mase, K. & Pentland, A. (1990b). Automatic lipreading by computer. Trans. Inst. Elec. Info. and Comm. Eng., J73-D-II(6), 796-803.

  • Mase, K. & Pentland, A. (1991). Automatic lipreading by optical flow analysis. Systems and Computers in Japan, 22(6), 67-76.

  • Mase, K., Watanabe, Y., & Suenaga, Y. (1990). A real time head motion detection system. Proceedings SPIE, 1260, 262-269.

  • McCown, W., Johnson, J., & Austin, S. (1986). Inability of delinquents to recognize facial affects. First International Conference on the Meaning of the Face (1985, Cardiff, Wales). Journal of Social Behavior & Personality, 1(4), 489-496.

  • McCown, W. G., Johnson, J. L., & Austin, S. H. (1988). Patterns of facial affect recognition errors in delinquent adolescent males. Journal of Social Behavior & Personality, 3(3), 215-224.

  • McGuigan, F. J. (1970). Covert oral behavior during the silent performance of language tasks. Psychological Bulletin, 74, 309-326.

  • McHugo, G. J., Lanzetta, J. T., Sullivan, D. G., Masters, R. D., & Englis, B. G. (1985). Emotional reactions to a political leader's expressive displays. Journal of Personality and Social Psychology, 49, 1513-1529.

  • McKendall, R. & Mintz, M. (1989). Robust fusion of location information. Preprint. Dept. of Computer & Info. Sci., Univ. of Pennsylvania.

  • Mead, C. (1989). Analog VLSI and Neural Systems. New York: Addison-Wesley Publishing Company.

  • Moffitt, F.H. & Mikhail, E.M. (1980). Photogrammetry (3rd ed.). Harper & Row.

  • Monrad-Krohn, G. H. (1924). On the dissociation of voluntary and emotional innervation in facial paresis of central origin. Brain, 47, 22-35.

  • Montgomery, A. & Jackson, P. (1983). Physical characteristics of the lips underlying vowel lipreading performance. Journal of the Acoustical Society of America, 73(6), 2134-2144.

  • Nahas, M., Huitric, H., & Saintourens, M. (1988). Animation of a B-spline figure. The Visual Computer, 3, 272-276.

  • Nassimbene, E. (1965). U. S. Patent No. 3192321, June 29.

  • Nishida, S. (1986). Speech recognition enhancement by lip information. ACM SIGCHI Bulletin, 17(4), 198-204.

  • O'Rourke, J. & Badler, N. (1980). Model-based image analysis of human motion using constraint propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-2(6), 522-536.

  • O'Toole, A. J., Abdi, H., Deffenbacher, K. A., & Bartlett, J. C. (1991). Classifying faces by race and sex using an autoassociative memory trained for recognition. In Proceedings of the Thirteenth Annual Cognitive Science Society, August 1991, Chicago, IL. pp. 847-851. Hillsdale: Lawrence Erlbaum.

  • O'Toole, A. J., Abdi, H., Deffenbacher, K. A., & Valentin, D. (1993). Low-dimensional representation of faces in higher dimensions of the face space. Journal of the Optical Society of America, A10, 405-411.

  • O'Toole, A. J., Millward, R. B., Richard, B., & Anderson, J. A. (1988). A physical system approach to recognition memory for spatially transformed faces. Neural Networks, 2, 179-199.

  • Ohmura, K., Tomono, A., &. Kobayashi, Y. (1988). Method of detecting face direction using image processing for human interface. Proceedings of SPIE, 1001, 625-632.

  • Oka, M., Tsutsui, K., Ohba, A., Kurauchi, Y., & Tago, T. (1987). Real-time manipulation of texture-mapped surfaces. Computer Graphics, 21(4), 181-188.

  • Oster, H., Hegley, D., & Nagel (in press). Adult judgment and fine-grained analysis of infant facial expressions: Testing the validity of a priori coding formulas. Developmental Psychology.

  • Owechko, Y., Dunning, G. J., Marom, E., & Soffer, B. H. (1987). Holographic associative memory with nonlinearities in the correlation domain. Appl. Opt., 26(10), 1900-1910.

  • Paek, E. G. & Jung, E. C. (1991). Simplified holographic associative memory using enhanced nonlinear processing with a thermoplastic plate. Opt. Lett., 16(13), 1034-1036.

  • Parke, F. I. (1972a). Computer generated animation of faces. University of Utah, Salt Lake City, June, UTEC-CSc-72-120.

  • Parke, F. I. (1972b). Computer generated animation of faces. ACM Nat'l Conference, 1, 451-457.

  • Parke, F. I. (1974). A parametric model for human faces. University of Utah, UTEC-CSc-75-047, Salt Lake City, Utah, December.

  • Parke, F. I. (1975). A model for human faces that allows speech synchronized animation. Computers and Graphics, 1, 1-4.

  • Parke, F. I. (1982). Parameterized models for facial animation. IEEE Computer Graphics and Applications, 2(9), 61-68.

  • Patel, M. & Willis, P. J. (1991). FACES: Facial animation, construction and editing system. In F. H. Post and W. Barth (Eds.), EUROGRAPHICS '91 (pp. 33-45). Amsterdam: North Holland.

  • Pearce, A., Wyvill, B., Wyvill, G., & Hill, D. (1986). Speech and expression: A computer solution to face animation. In M. Wein and E. M. Kidd (Eds.), Graphics Interface '86 (pp. 136-140). Ontario: Canadian Man-Computer Communications Society.

  • Pearlmutter, B. (1989). Learning state space trajectories in recurrent neural networks. Neural Computation, 1, 263-269.

  • Pelachaud, C. (1991). Communication and coarticulation in facial animation. University of Pennsylvania, Department of Computer and Information Science, October.

  • Pentland, A. (1992). Personal communication.

  • Pentland, A., Etcoff, N., & Starner, T. (1992). Expression recognition using eigenfeatures. M.I.T. Media Laboratory Vision and Modeling Group Technical Report No. 194, August.

  • Pentland, A. & Horowitz, B. (1991). Recovery of non-rigid motion. IEEE Trans. Pattern Analysis and Machine Intelligence, 13(7), 730-742.

  • Pentland, A. & Mase, K. (1989). Lip reading: Automatic visual recognition of spoken words. MIT Media Lab Vision Science Technical Report 117, January 15.

  • Pentland, A. & Sclaroff, S. (1991). Closed-form solutions for physically-based modeling and recognition. IEEE Trans. Pattern Analysis and Machine Intelligence, 13(7), 715-730.

  • Perkell, J. S. (1986). Coarticulation strategies: Preliminary implications of a detailed analysis of lower lip protrusion movements. Speech Communication, 5(1), 47-68.

  • Perrett, D. I., Rolls, E. T., & Caan, W. (1982). Visual neurones responsive to faces in the monkey temporal cortex. Experimental Brain Research, 47, 329-342.

  • Perrett, D. I., Smith, P. A. J., Potter, D. D., Mistlin, A. J., Head, A. S., Milner, A. D., & Jeeves, M. A. (1984). Neurones responsive to faces in the temporal cortex: studies of functional organization, sensitivity to identity and relation to perception. Human Neurobiology, 3, 197-208.

  • Perry, J. L. & Carney, J. M. (1990). Human face recognition using a multilayer perceptron. Proceedings of International Joint Conference on Neural Networks, Washington D.C. Volume 2, pp. 413-416.

  • Petajan, E. D. (1984). Automatic lipreading to enhance speech recognition. PhD Dissertation, University of Illinois at Urbana-Champaign.

  • Petajan, E. D., Bischoff, B., Bodoff, D., & Brooke, N. M. (1988). An improved automatic lipreading system to enhance speech recognition. CHI 88, 19-25.

  • Pieper, S. D. (1989). More than skin deep: Physical modeling of facial tissue. Media Arts and Sciences, Massachusetts Institute of Technology.

  • Pieper, S. D. (1991). CAPS: Computer-aided plastic surgery. Media Arts and Sciences, Massachusetts Institute of Technology, September.

  • Pigarev, I. N., Rizzolatti, G., & Scandolara, C. (1979). Neurones responding to visual stimuli in the frontal lobe of macaque monkeys. Neuroscience Letters, 12, 207-212.

  • Platt, S. M. (1980). A system for computer simulation of the human face. The Moore School, University of Pennsylvania.

  • Platt, S. M. (1985). A structural model of the human face. The Moore School, University of Pennsylvania.

  • Platt, S. M. & Badler, N. I. (1981). Animating facial expressions. Computer Graphics, 15(3), 245-252.

  • Poeck, K. (1969). Pathophysiology of emotional disorders associated with brain damage. In P. J. Vinken & G. W. Bruyn (Eds.), Handbook of Clinical Neurology (Vol. 3). Amsterdam: North Holland.

  • Poggio, T. (1990). A theory of how the brain might work. In Cold Spring Harbor Symposia on Quantitative Biology (pp. 899-910). Cold Spring Harbor Laboratory Press.

  • Ponce, J. & Kriegman, D. J. (1989). On Recognizing and positioning curved 3D objects from image contours. Proc. IEEE Workshop on Interpretation of 3D Scenes, Austin, TX.

  • Prkachin, K. M. & Mercer, S. R. (1989). Pain expression in patients with shoulder pathology: validity properties and relationship to sickness impact. Pain, 39, 257-265.

  • Psaltis, D., Brady, D., Gu, X.-G., & Lin, S. (1990). Holography in artificial neural networks. Nature, 343(6256), 325-330.

  • Psaltis, D., Brady, D., & Wagner, K. (1988). Adaptive optical networks using photorefractive crystals. Appl. Opt., 27(9), 1752-1759.

  • Psaltis, D. & Farhat, N. (1985). Optical information processing based on an associative-memory model of neural nets with thresholding and feedback. Opt. Lett., 10(2), 98-100.

  • Reeves, A. G. & Plum, F. (1969). Hyperphagia, rage, and dementia accompanying a ventromedial hypothalamic neoplasm. Archives of Neurology, 20(6), 616-624.

  • Requicha, A.A.G. (1980). Representations of rigid solids. ACM Computing Surveys, 12, 437-464.

  • Rhiner, M. & Stucki, P. (1992). Database requirements for multimedia applications. In L. Kjelldahl (Ed.), Multimedia: Systems, Interaction and Applications. Springer.

  • Rinn, W. E. (1984). The neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95, 52-77.

  • Rolls, E. T. (1984). Neurons in the cortex of the temporal lobe and in the amygdala of the monkey with responses selective for faces. Human Neurobiology, 3, 209-222.

  • Rolls, E. T., Baylis, G. C., & Hasselmo, M. E. (1987). The responses of neurons in the cortex in the superior temporal sulcus of the monkey to band-pass spatial frequency filtered faces. Vision Research, 27(3), 311-326.

  • Rolls, E. T., Baylis, G. C., Hasselmo, M. E., & Nalwa, V. (1989). The effect of learning on the face selective responses of neurons in the cortex in the superior temporal sulcus of the monkey. Experimental Brain Research, 76(1), 153-164.

  • Rosenfeld, H. M. (1982). Measurement of body motion and orientation. In K.R. Scherer & P. Ekman (Eds.), Handbook of methods in nonverbal behavior research. (pp. 199-286). Cambridge: Cambridge University Press.

  • Rumelhart, D. E., Hinton, G., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the microstructure of cognition (pp. 318-362). Cambridge, Mass.: MIT Press.

  • Russell, J. A. (1991a). The contempt expression and the relativity thesis. Motivation and Emotion, 15, 149-168.

  • Russell, J. A. (1991b). Rejoinder to Ekman, O'Sullivan and Matsumoto. Motivation and Emotion, 15, 177-184.

  • Russell, J. (1991c). Negative results on a reported facial expression of contempt. Motivation and Emotion, 15, 281-291.

  • Russell, J. A. & Fehr, B. (1987). Relativity in the perception of emotion in facial expressions. Journal of Experimental Psychology, 116, 233-237.

  • Sakai, T., Nagao, M., & Fujibayashi, S. (1969). Line extraction and pattern detection in a photograph, Pattern Recognition, 1, 233-248.

  • Sakai, T., Nagao, M., & Kanade, T. (1972). Computer analysis and classification of photographs of human faces. First USA-JAPAN Computer Conference, session 2-7.

  • Satoh, Y., Miyake, Y., Yaguchi, H., & Shinohara, S. (1990). Facial pattern detection and color correction from negative color film, Journal of Imaging Technology, 16(2), 80-84.

  • Schachter, J. (1957). Pain, fear and anger in hypertensives and normotensives: a psychophysiological study. Psychosomatic Medicine, 19, 17-29.

  • Sejnowski, T. J. & Churchland, P. S. (1992). Silicon brains. Byte, 17(10), 137-146.

  • Sejnowski, T. J. & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text. Complex Systems, 1, 145-168.

  • Sergent, J., Ohta, S., & MacDonald, B. (1992). Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain, 115(Pt 1), 15-36.

  • Sethi, I. K. & Jain, R. (1987). Finding trajectories of feature points in a monocular image sequence. IEEE Trans. PAMI, PAMI-9(1), 56-73.

  • Smith, S. (1989). Computer lip reading to augment automatic speech recognition. Speech Technology, 175-181.

  • Steimer-Krause, E., Krause, R., & Wagner, G. (1990). Interaction regulations used by schizophrenics and psychosomatic patients. Studies on facial behavior in dyadic interactions. Psychiatry, 53, 209-228.

  • Stern, J. A., & Dunham, D. N. (1990). The ocular system. In J. T. Cacioppo and L. G. Tassinary (Eds.), Principles of psychophysiology: Physical, social, and inferential elements (pp. 513-553). New York: Cambridge University Press.

  • Stork, D. G., Wolff, G., & Levine, E. (1992). Neural network lipreading system for improved speech recognition. Proceedings of the 1992 International Joint Conference on Neural Networks, Baltimore, MD.

  • Sumby, W. H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212-215.

  • Tanner, J. E. & Mead, C. A. (1984). A correlating optical motion detector. In Penfield (Ed.), Proceedings of Conference on Advanced Research in VLSI, January, MIT, Cambridge, MA.

  • Tassinary, L. G., Cacioppo, J. T., & Geen, T. R. (1989). A psychometric study of surface electrode placements for facial electromyographic recording: I. The brow and cheek muscle regions. Psychophysiology, 26, 1-16.

  • Terzopoulos, D. & Waters, K. (1990a). Analysis of facial images using physical and anatomical models. Proceedings of the International Conference on Computer Vision, 1990, 727-732.

  • Terzopoulos, D. & Waters, K. (1990b). Physically-based facial modeling, analysis, and animation. Journal of Visualization and Computer Animation, 1(4), 73-80.

  • Tolbruck, T. (1992). Analog VLSI visual transduction and motion processing. Ph.D. thesis, California Institute of Technology.

  • Tranel, D., Damasio, A. R., & Damasio, H. (1988). Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity. Neurology, 38, 690-696.

  • Turk, M. A. (1991). Interactive time vision: Face recognition as a visual behavior. Ph.D. Thesis, MIT.

  • Turk, M. A. & Pentland, A.P. (1989). Face processing: Models for recognition. SPIE, Intelligent Robots and Computer Vision VIII, 192.

  • Turk, M. A. & Pentland, A. P. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71-86.

  • Van Dongen, H. R., Arts, W. F., & Yousef-Bak, E. (1987). Acquired dysarthria in childhood: An analysis of dysarthric features in relation to neurologic deficits. Neurology, 37(2), 296-299.

  • Van Gelder, R. S. & Van Gelder, L. (1990). Facial expression and speech: Neuroanatomical considerations. Special Issue: Facial asymmetry: Expression and paralysis. International Journal of Psychology, 25(2), 141-155.

  • Vannier, M. W., Pilgram, T., Bhatia, G., & Brunsden, B. (1991). Facial surface scanner. IEEE Computer Graphics and Applications, 11(6), 72-80.

  • Vaughn, K. B., & Lanzetta, J. T. (1980). Vicarious instigation and conditioning of facial expressive and autonomic responses to a model's expressive display of pain. Journal of Personality and Social Psychology, 38, 909-923.

  • Viennet, E. & Fogelman-Soulie, F. (1992). Multiresolution scene segmentation using MLPs. Proceedings of International Joint Conference on Neural Networks, V. III (pp. 55-59). Baltimore.

  • Wagner, K. & Psaltis, D. (1987). Multilayer optical learning networks. Appl. Opt., 26(23), 5061-5076.

  • Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., & Lang, K. (1989). Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37, 328-339.

  • Waite, C. T. (1989). The Facial Action Control Editor, Face: A parametric facial expression editor for computer generated animation. Massachusetts Institute of Technology, Media Arts and Sciences, Cambridge, February.

  • Waite, J. B. & Welsh, W. J. (1990). Head boundary location using snakes, British Telecom Technol. J. 8(3), 127-136.

  • Wang, S.-G. & George, N. (1991). Facial recognition using image and transform representations. In Electronic Imaging Final Briefing Report, U.S. Army Research Office, P-24749-PH, P-24626-PH-UIR, The Institute of Optics, University of Rochester, New York.

  • Watanabe, Y. & Suenaga, Y. (1992). A trigonal prism-based method for hair image generation. IEEE Computer Graphics and Applications, January, 47-53.

  • Waters, K. (1986). Expressive three-dimensional facial animation. Computer Animation (CG86), October, 49-56.

  • Waters, K. (1987). A muscle model for animating three-dimensional facial expressions. Computer Graphics (SIGGRAPH '87), 21(4), July, 17-24.

  • Waters, K. (1988). The computer synthesis of expressive three- dimensional facial character animation. Middlesex Polytechnic, Faculty of Art and Design, Cat Hill Barnet Herts, EN4 8HT, June.

  • Waters, K. & Terzopoulos, D. (1990). A physical model of facial tissue and muscle articulation. Proceedings of the First Conference on Visualization in Biomedical Computing, May, 77-82.

  • Waters, K. & Terzopoulos, D. (1991). Modeling and animating faces using scanned data. Journal of Visualization and Computer Animation, 2(4), 123-128.

  • Waters, K. & Terzopoulos, D. (1992). The computer synthesis of expressive faces. Phil. Trans. R. Soc. Lond., 335(1273), 87-93.

  • Welsh, W. J., Simons, A. D., Hutchinson, R. A., Searby, S. (no date). Synthetic face generation for enhancing a user interface, British Telecom Research Laboratories, Martlesham Heath, Ipswich IP5 7RE, UK.

  • Will, P. M. & Pennington, K. S. (1971). Grid Coding: A preprocessing technique for robot and machine vision. Artificial Intelligence, 2, 319-329.

  • Williams, L. (1990). Performance-driven facial animation. Computer Graphics, 24(4), 235-242.

  • Witkin, A. P. (1981). Recovering surface shape and orientation from texture. Artificial Intelligence 17, 17-45.

  • Wolff, L. B. (1988). Shape from photometric flow fields. Proc. SPIE, Optics, Illumination, and Image Sensing for Machine Vision, III, (pp. 206-213) Cambridge, MA.

  • Wong, K. H., Law, H. H. M., & Tsang, P. W. M. (1989). A system for recognizing human faces. Proceedings of ICASSP, May, pp. 1638-1641.

  • Wyvill, B. (1989). Expression control using synthetic speech. State of the Art in Facial Animation, SIGGRAPH '89 Tutorials, ACM, 22, 163-175.

  • Yamamoto, M. & Koshikawa, K. (1991). Human motion analysis based on a robot arm model. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 664- 665). June, Maui, Hawaii.

  • Yamana, T. & Suenaga, Y. (1987). A method of hair representation using anisotropic reflection. IECEJ Technical Report PRU87-3, May, 15-20, (in Japanese).

  • Yamato, J., Ohya, J., & Ishii, K. (1992). Recognizing human action in time-sequential images using hidden Markov model. Proc. IEEE Conference on Computer Vision and Pattern Recognition, Champaign, Illinois, 379-385.

  • Yeh, P., Chiou, A.E.T., & Hong, J. (1988). Optical interconnection using photorefractive dynamic holograms. Appl. Opt., 27(11), 2093-2096.

  • Young, M. P. & Yamane, S. (1992). Sparse population coding of faces in the inferotemporal cortex. Science, 256, 1327-1331.

  • Yuhas, B., Goldstein, M. Jr., & Sejnowski, T. (1989). Integration of acoustic and visual speech signals using neural networks. IEEE Communications Magazine, November, 65-71.

  • Yuille, A. L. (1991). Deformable templates for face recognition. Journal of Cognitive Neuroscience, 3(1), 59-70.

  • Yuille, A. L., Cohen, D. S., & Hallinan, P. W. (1989). Feature extraction from faces using deformable templates. Proc. IEEE CVPR, June, 104-109.

  • Yuille, A. L. & Hallinan, P. W. (1992). Deformable Templates. In A. Blake & A.L. Yuille (Eds.), Active Vision. Cambridge, Mass: MIT Press.

  • Zajonc, R. B. (1984). The interaction of affect and cognition. In K. R. Scherer and P. Ekman (Eds.), Approaches to Emotion (pp. 239-246). Hillsdale, N.J.: Lawrence Erlbaum Associates.