Enhancing Emotion Recognition through Deep Learning and Brain-Computer Interface Technology
Tanvir Anjum Labir 1, Poly Rani Ghosh 1*, Halima Mowla Anna 1
Journal of Primeasia 4(1) 1-15 https://doi.org/10.25163/primeasia.4140046
Submitted: 09 April 2023 Revised: 17 June 2023 Published: 20 June 2023
Combining deep learning with brain-computer interface (BCI) technology for emotion recognition advances the understanding and application of neural networks in neuroscience research.
Abstract
Deep learning, a specialized branch of machine learning, uses multilayer neural networks for data processing tasks in which feature extraction and transformation require more than one layer of processing. A brain-computer interface (BCI) is a direct communication link between the brain's electrical activity and an external device. In this work, brain data are collected using BCI technology with electrodes that record neural activity, and the research focuses on recognizing multimodal emotions after feature extraction and multilayer processing with an artificial neural network (ANN). Several organizations have recently released large datasets of physiological signals (EEG, eye movements) recorded while participants experience different emotions. These collections are designed specifically to encourage the development of effective deep-learning emotion recognition algorithms, with volunteers performing sets of tasks defined by the research protocol. In this study, we evaluate deep learning approaches, specifically the artificial neural network (ANN), on the SEED-IV electroencephalogram (EEG) dataset. We briefly describe our implementation of emotion recognition with an ANN, which is one of the topics of our research; explore the operating principles of artificial neural networks, our primary model for identifying emotions; and offer several proposals for applying neural networks to EEG inputs. With an ANN combined with 3D convolutional layers, we achieved an accuracy of 63.425%.
Keywords: Deep Learning, Brain-Computer Interface, Emotion Recognition, Artificial Neural Networks, EEG Data
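The following is a minimal sketch, not the authors' released code, of how 3D convolutional layers feeding a fully connected ANN head could be applied to EEG-derived feature cubes for four-class emotion recognition in a SEED-IV-style setup. The input shape (five frequency bands over a 9x9 electrode grid), layer widths, and the four emotion labels (neutral, sad, fear, happy) are illustrative assumptions.

```python
# Hedged sketch of an EEG emotion classifier: 3D convolutional feature
# extractor followed by a fully connected ANN head. Input layout and sizes
# are assumptions for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn


class EEGEmotionNet(nn.Module):
    def __init__(self, n_classes: int = 4):  # assumed 4 SEED-IV emotion classes
        super().__init__()
        # 3D convolutions over stacked EEG feature maps (bands x grid height x grid width)
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((2, 2, 2)),
        )
        # Fully connected ANN head mapping pooled features to emotion logits
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 2 * 2 * 2, 64),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    # A batch of 8 hypothetical feature cubes: 1 channel, 5 frequency bands,
    # electrodes interpolated onto a 9x9 spatial grid.
    model = EEGEmotionNet()
    dummy = torch.randn(8, 1, 5, 9, 9)
    logits = model(dummy)                                   # shape: (8, 4)
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
    print(logits.shape, loss.item())
```

In this sketch the 3D convolutions capture joint spectral-spatial structure across frequency bands and electrode positions, while the dense layers act as the ANN classifier described in the abstract; in practice the input representation would be whatever band-power or differential-entropy features are extracted from the SEED-IV recordings.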