Qiang Li, Jun Wang, Wucong Zhang, Qian Long Kweh
Recently, methods that use skeleton-based graph neural networks for action recognition have become increasingly popular, because a skeleton carries intuitive and rich action information without being affected by background, lighting and other factors. The spatial–temporal graph convolutional network (ST-GCN) is a dynamic skeleton model that automatically learns spatial–temporal patterns from data; it not only has stronger expressive ability but also stronger generalisation ability, showing remarkable results on public datasets. However, ST-GCN directly learns the information of adjacent nodes (local information) and is insufficient at learning the relations of non-adjacent nodes (global information), which are required by actions such as clapping that depend on non-adjacent joints. Therefore, this paper proposes an ST-GCN based on node attention (NA-STGCN), which addresses the lack of global information in ST-GCN by introducing a node attention module that explicitly models the interdependence between all nodes. Experimental results on the NTU RGB+D dataset show that the node attention module effectively improves the accuracy and feature-representation ability of the existing algorithm, and clearly improves recognition of actions that require global information.
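The abstract does not give the exact form of the node attention module, so the following is only a minimal sketch of one plausible realisation: a squeeze-and-excitation-style attention over the joint (node) axis of an ST-GCN feature map, assuming PyTorch and the usual (batch, channels, frames, joints) tensor layout; the class name NodeAttention, the reduction parameter and the placement after an ST-GCN block are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NodeAttention(nn.Module):
    """Hypothetical node-attention sketch: learns a per-joint weight so that
    every joint can be reweighted using information from all other joints,
    i.e. a global (non-adjacent) dependency on top of the local graph convolution."""
    def __init__(self, num_nodes, reduction=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_nodes, num_nodes // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_nodes // reduction, num_nodes),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (N, C, T, V) feature map produced by an ST-GCN block
        n, c, t, v = x.size()
        s = x.mean(dim=(1, 2))            # squeeze channels and time -> (N, V)
        w = self.fc(s).view(n, 1, 1, v)   # per-node weights in (0, 1)
        return x * w                       # rescale every joint globally

# Usage sketch: 25 joints as in the NTU RGB+D skeleton, inserted after a block.
attn = NodeAttention(num_nodes=25)
features = torch.randn(8, 64, 300, 25)    # (batch, channels, frames, joints)
out = attn(features)

Because the attention weights for each joint are computed from a pooled summary of all joints, the module lets the network model relations between non-adjacent nodes that the fixed skeleton adjacency of ST-GCN cannot express directly.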