Title: Static Facial Expressions in the Wild (SFEW)
Identifier: canberra.edu.au/Collection/Dsfew001
Publisher: University of Canberra
Creator: Dr Roland Goecke
Date: 2015
Type: Dataset
Language: English
Subjects: Computer Vision; Information and Computing Sciences; Artificial Intelligence and Image Processing; Image Processing
Coverage: 149.092482,-35.231234 149.074571,-35.231234 149.074571,-35.242438 149.092482,-35.242438 149.092482,-35.231234

Description:
Quality data recorded in varied, realistic environments is vital for effective research on human faces. Facial expression analysis has been a very active field of research, and many robust methods have been reported in the literature. However, these methods have been evaluated on different databases, and under different protocols even within the same databases. The lack of a standard protocol makes it difficult to compare systems and hinders progress in the field. We therefore propose a facial expression recognition (FER) challenge "in the wild". Currently available datasets for human facial expression analysis have been generated in highly controlled lab environments. We present a new static facial expression dataset, Static Facial Expressions in the Wild (SFEW), containing a subset of static facial expression images extracted from the temporal facial expression database Dynamic Facial Expressions in the Wild (DFEW), which consists of short video sequences of facial expressions extracted from movies. We also propose a person-independent training and testing protocol for expression recognition. While movies are often shot in somewhat controlled environments, they provide conditions far closer to the real world than current lab-recorded datasets. SFEW contains both frontal and non-frontal faces, multiple faces in one scene, occlusions, and varied illumination conditions, closely resembling real-world scenarios. The database has been divided into equally sized sets in a person-independent manner.

SFEW contains 600 images, labelled for the six basic expressions (angry, disgust, fear, happy, sad, surprise) and the neutral class. There are 68 subjects in the database in total. We have presented a static facial expression database derived from movies. As part of ongoing work, we will provide baseline results based on the experimental protocols discussed.
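The person-independent protocol mentioned above means that no subject appears in more than one set. As a rough illustration only (the dataset's actual file layout and subject IDs are not specified here, so the ID list below is hypothetical), a split with disjoint subjects could be sketched like this:

```python
# Sketch of a person-independent split: partition image indices into two
# sets such that no subject ID appears in both. Subject IDs per image are
# assumed inputs; SFEW's real metadata format is not described in this record.
from collections import defaultdict
import random


def person_independent_split(image_subjects, seed=0):
    """Return two lists of image indices whose subject sets are disjoint.

    image_subjects: sequence where image_subjects[i] is the subject ID
    of image i. Subjects (not images) are shuffled and halved, so each
    subject's images land entirely in one set.
    """
    by_subject = defaultdict(list)
    for idx, subj in enumerate(image_subjects):
        by_subject[subj].append(idx)
    subjects = sorted(by_subject)
    random.Random(seed).shuffle(subjects)
    half = len(subjects) // 2
    set1 = [i for s in subjects[:half] for i in by_subject[s]]
    set2 = [i for s in subjects[half:] for i in by_subject[s]]
    return set1, set2
```

Note that splitting by subject rather than by image means the two sets are only approximately equal in image count when subjects contribute different numbers of images.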
Access: Other. Public (after contacting the owner, Abhinav Dhall, first).
Copyright of databases rests with Abhinav Dhall.
Ph: +61 2 6125 4043
Fax: +61 2 6125 0010
School of Computer Science, Computer Science and Information Technology Building (108), The Australian National University, ACT 0200, Australia