1. School of Information Science and Technology, Northwest University, Xi'an 710127, China
2. School of Art and Archaeology, Zhejiang University, Hangzhou 310028, China
[ "赵学荣(1999- ),男,西北大学信息科学与技术学院硕士生,主要研究方向为无线感知" ]
[ "王旋(1993- ),女,西北大学信息科学与技术学院博士生,主要研究方向为毫米波人体活动感知以及无源物联网" ]
[ "刘彤(1998- ),男,西北大学信息科学与技术学院硕士生,主要研究方向为无线感知" ]
[ "郑霞(1979- ),女,博士,浙江大学艺术与考古学院副教授,主要研究方向为博物馆信息化、智慧博物馆" ]
[ "江翼成(1997- ),男,浙江大学艺术与考古学院硕士生,主要研究方向为文化遗产数字化传播、智慧博物馆" ]
Online publication date: 2023-08
Print publication date: 2023-08-20
Xuerong ZHAO, Xuan WANG, Tong LIU, et al. mmWave radar based robust sign language recognition for the smart museum[J]. Telecommunications Science, 2023, 39(8): 109-117. DOI: 10.11959/j.issn.1000-0801.2023144.
A smart museum is a new form of museum that uses devices and technologies such as the Internet of things (IoT) and artificial intelligence (AI) to build information interaction channels among people, things, and space. Sign language recognition not only enables visitors with hearing or speech impairments to tour the museum without barriers, but also helps analyze visitors' natural gesture interaction. However, methods based on cameras or wearable devices may raise privacy or usability concerns when applied in museum spaces. Therefore, a robust sign language recognition method based on millimeter-wave radar was proposed. First, features describing how the distance and velocity of different gestures relative to the radar change over time were extracted. Then, a data enhancement method based on physical meaning was applied to expand the training data. Finally, a residual network (ResNet) operating on the pre-processed range-time and Doppler-time features was designed to further remove environment-related information and to fuse the two features for classification. Experimental results show that the method can effectively recognize sign language and maintains an average recognition accuracy of over 90% even when the testing environment and the user's location change, providing a new approach to sign language and gesture recognition for the smart museum.
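To make the pipeline described in the abstract concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: it derives a range-time map and a Doppler-time map from a raw FMCW radar data cube using FFTs, and feeds the two maps into a small two-branch residual network that fuses them for classification. The cube shape, the helper names `range_doppler_features` and `TwoBranchFusionNet`, the layer sizes, and the ten-class output are assumptions made for the example; the physics-based data enhancement step is omitted.

```python
# A minimal sketch (not the paper's code) of the described pipeline:
# range-time and Doppler-time feature extraction followed by a
# two-branch residual network that fuses the features for classification.
import numpy as np
import torch
import torch.nn as nn


def range_doppler_features(cube: np.ndarray):
    """cube: complex IF samples, assumed shape (frames, chirps, samples)."""
    # Range FFT along fast time; magnitude averaged over chirps
    # -> range-time map of shape (frames, samples).
    range_fft = np.fft.fft(cube, axis=2)
    range_time = np.abs(range_fft).mean(axis=1)
    # Doppler FFT along slow time (chirps); magnitude averaged over range bins
    # -> Doppler-time map of shape (frames, chirps).
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)
    doppler_time = np.abs(doppler_fft).mean(axis=2)
    return range_time.astype(np.float32), doppler_time.astype(np.float32)


class ResBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))


class TwoBranchFusionNet(nn.Module):
    """One residual branch per feature map, fused before the classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                ResBlock(16), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rt_branch, self.dt_branch = branch(), branch()
        self.classifier = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, num_classes))

    def forward(self, rt, dt):
        fused = torch.cat([self.rt_branch(rt), self.dt_branch(dt)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    # Synthetic example: 32 frames, 64 chirps, 128 samples per chirp.
    cube = np.random.randn(32, 64, 128) + 1j * np.random.randn(32, 64, 128)
    rt, dt = range_doppler_features(cube)
    rt_t = torch.from_numpy(rt)[None, None]   # (1, 1, frames, samples)
    dt_t = torch.from_numpy(dt)[None, None]   # (1, 1, frames, chirps)
    logits = TwoBranchFusionNet(num_classes=10)(rt_t, dt_t)
    print(logits.shape)  # torch.Size([1, 10])
```

Pooling each branch to a fixed-length vector before concatenation lets the fusion layer accept range-time and Doppler-time maps of different widths, which is one simple way to realize the feature fusion step the abstract describes.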