[1] Patel C I, Labana D, Pandya S, et al. Histogram of oriented gradient-based fusion of features for human action recognition in action video sequences [J]. Sensors, 2020, 20(24): 7299. doi: 10.3390/s20247299
[2] Ma Shiwei, Liu Lina, Fu Qi, et al. Using PHOG fusion features and multi-class Adaboost classifier for human behavior recognition [J]. Optics and Precision Engineering, 2018, 26(11): 2827-2837. (in Chinese) doi: 10.3788/OPE.20182611.2827
[3] Li Qinghui, Li Aihua, Cui Zhigao, et al. Action recognition via restricted dense trajectories and spatio-temporal co-occurrence feature [J]. Optics and Precision Engineering, 2018, 26(1): 230-237. (in Chinese)
[4] Sandhya R S, Apparao N G, Usha S V. Kinematic joint descriptor and depth motion descriptor with convolutional neural networks for human action recognition [J]. Materials Today: Proceedings, 2020, 37(2): 3164-3173. doi: 10.1016/j.matpr.2020.09.052
[5] Pei Xiaomin, Fan Huijie, Tang Yandong. Action recognition method of spatio-temporal feature fusion deep learning network [J]. Infrared and Laser Engineering, 2018, 47(2): 0203007. (in Chinese) doi: 10.3788/IRLA201847.0203007
[6] Pei Xiaomin, Fan Huijie, Tang Yandong. Two-person interaction recognition based on multi-stream spatio-temporal fusion network [J]. Infrared and Laser Engineering, 2020, 49(5): 20190552. (in Chinese) doi: 10.3788/IRLA20190552
[7] Liu S Q, Zhang J C, Zhang Y Z, et al. A wearable motion capture device able to detect dynamic motion of human limbs [J]. Nature Communications, 2020, 11(1): 5615. doi: 10.1038/s41467-020-19424-2
[8] Su Benyue, Zheng Dandan, Tang Qingfeng, et al. Human daily short-time activity recognition method driven by single sensor data [J]. Infrared and Laser Engineering, 2019, 48(2): 0226003. (in Chinese) doi: 10.3788/IRLA201948.0226003
[9] Wang Zhenyu, Zhang Lei. Deep convolutional and gated recurrent neural networks for sensor-based activity recognition [J]. Journal of Electronic Measurement and Instrumentation, 2020, 34(1): 1-9. (in Chinese)
[10] Wang Y, Jiang X L, Cao R Y, et al. Robust indoor human activity recognition using wireless signals [J]. Sensors, 2015, 15(7): 17195-17208. doi: 10.3390/s150717195
[11] Liu Xiwen, Chen Haiming. Wi-ACR: a human action counting and recognition method based on CSI [J]. Journal of Beijing University of Posts and Telecommunications, 2020, 43(5): 105-111. (in Chinese)
[12] De P, Chatterjee A, Rakshit A. PIR sensor-based AAL tool for human movement detection: modified MCP-based dictionary learning approach [J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(10): 7377-7385. doi: 10.1109/TIM.2020.2981106
[13] Pourpanah F, Zhang B, Ma R, et al. Non-intrusive human motion recognition using distributed sparse sensors and the genetic algorithm based neural network[C]//2018 IEEE Sensors. IEEE, 2018: 1-4.
[14] Sun Q, Hu F. Dual-mode binary thermal sensing for indoor human scenario recognition with pyroelectric infrared sensors[C]//2019 IEEE Sensors. IEEE, 2019: 1-4.
[15] Guan Q, Li C, Qin L, et al. Daily activity recognition using pyroelectric infrared sensors and reference structures [J]. IEEE Sensors Journal, 2018, 19(5): 1645-1652.
[16] Yang Y, Yang H L, Liu Z X, et al. Fall detection system based on infrared array sensor and multi-dimensional feature fusion [J]. Measurement, 2022, 192: 110870. doi: 10.1016/j.measurement.2022.110870
[17] Lecun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition [J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324. doi: 10.1109/5.726791
[18] Cho K, Van Merrienboer B, Gulcehre C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[DB/OL]. (2014-06-03). https://arxiv.org/abs/1406.1078.
[19] Chung J, Gulcehre C, Cho K H, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[DB/OL]. (2014-12-11). https://arxiv.org/abs/1412.3555.
[20] Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: A simple way to prevent neural networks from overfitting [J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.