Journal of Engineering and Applied Sciences

Year: 2017
Volume: 12
Issue: 7
Pages: 1864-1870

Affect Recognition Challenge Bridging Across Audio, Video and Physiological Data

Authors: D. Lakshmi and R. Ponnusamy

References

Almaev, T.R. and M.F. Valstar, 2013. Local Gabor binary patterns from three orthogonal planes for automatic facial expression recognition. Proceedings of the Conference on Affective Computing and Intelligent Interaction (ACII), Humaine Association, September 2-5, 2013, IEEE, Geneva, Switzerland, pp: 356-361.

Bone, D., C.C. Lee and S. Narayanan, 2014. Robust unsupervised arousal rating: A rule-based framework with knowledge-inspired vocal features. IEEE Trans. Affective Comput., 5: 201-213.

Chen, M., Y. Zhang, Y. Li, M.M. Hassan and A. Alamri, 2015. AIWAC: Affective interaction through wearable computing and cloud technology. IEEE Wireless Commun., 22: 20-27.

Cronbach, L.J., 1951. Coefficient alpha and the internal structure of tests. Psychometrika, 16: 297-334.

Dawson, M., A. Schell and D. Filion, 2007. The Electrodermal System. In: Handbook of Psychophysiology. Vol. 2, Cambridge University Press, Cambridge, pp: 200-223.

Eyben, F., F. Weninger, F. Gross and B. Schuller, 2013. Recent developments in openSMILE, the Munich open-source multimedia feature extractor. Proceedings of the 21st ACM International Conference on Multimedia, October 21-25, 2013, ACM, New York, USA., pp: 835-838.

Eyben, F., K.R. Scherer, B.W. Schuller, J. Sundberg and E. Andre et al., 2016. The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing. IEEE Trans. Affective Comput., 7: 190-202.

Grimm, M. and K. Kroschel, 2005. Evaluation of natural emotions using self assessment manikins. Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, November 27, 2005, IEEE, San Juan, ISBN: 0-7803-9478-X, pp: 381-385.

Halko, N., P.G. Martinsson, Y. Shkolnisky and M. Tygert, 2011. An algorithm for the principal component analysis of large data sets. SIAM J. Sci. Comput., 33: 2580-2594.

Hall, M., E. Frank, G. Holmes, B. Pfahringer, P. Reutemann and I.H. Witten, 2009. The WEKA data mining software: An update. SIGKDD Explorat. Newslett., 11: 10-18.

Keltner, D. and J.S. Lerner, 2010. Emotion. In: Handbook of Social Psychology. Fiske, S., D. Gilbert and G. Lindzey (Eds.). John Wiley & Sons Inc., Hoboken, New Jersey, pp: 317-331.

Knapp, R.B., J. Kim and E. Andre, 2011. Physiological signals and their use in augmenting emotion recognition for human-machine interaction. In: Emotion-Oriented Systems. Cowie, R., C. Pelachaud and P. Petta (Eds.). Springer Berlin Heidelberg, Heidelberg, Germany, ISBN: 978-3-642-15183-5, pp: 133-159.

Koelstra, S., C. Muhl, M. Soleymani, J.S. Lee and A. Yazdani et al., 2012. DEAP: A database for emotion analysis using physiological signals. IEEE Trans. Affective Comput., 3: 18-31.

Picard, R., 2014. Affective media and wearables: Surprising findings. Proceedings of the 22nd ACM International Conference on Multimedia, November 3-7, 2014, ACM, New York, USA., ISBN: 978-1-4503-3063-3, pp: 3-4.

Ringeval, F., A. Sonderegger, B. Noris, A. Billard and J. Sauer et al., 2013. On the influence of emotional feedback on emotion awareness and gaze behavior. Proceedings of the Conference on Affective Computing and Intelligent Interaction (ACII), Humaine Association, September 2-5, 2013, IEEE, Geneva, Switzerland, pp: 448-453.

Ringeval, F., A. Sonderegger, J. Sauer and D. Lalanne, 2013. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), April 22-26, 2013, IEEE, Shanghai, China, ISBN: 978-1-4673-5545-2, pp: 1-8.

Ringeval, F., F. Eyben, E. Kroupi, A. Yuce and J.P. Thiran et al., 2015. Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data. Pattern Recognit. Lett., 66: 22-30.

Ringeval, F., S. Amiriparian, F. Eyben, K. Scherer and B. Schuller, 2014. Emotion recognition in the wild: Incorporating voice and lip activity in multimodal decision-level fusion. Proceedings of the 16th International Conference on Multimodal Interaction, November 12-16, 2014, ACM, New York, USA., ISBN: 978-1-4503-2885-2, pp: 473-480.
