Original Paper Information:
Unsupervised Domain Adaptation for Device-free Gesture Recognition
Published: 2021-11-20T14:25:35+00:00.
Category: Computer Science
Authors: Bin-Bin Zhang, Dongheng Zhang, Yadong Li, Yang Hu, Yan Chen
Device-free human gesture recognition with Radio Frequency (RF) signals has attained acclaim due to the omnipresence, privacy protection, and broad coverage of RF signals. However, neural network models trained for recognition with data collected from a specific domain suffer significant performance degradation when applied to a new domain. To tackle this challenge, we propose an unsupervised domain adaptation framework for device-free gesture recognition that makes effective use of unlabeled target domain data. Specifically, we apply pseudo labeling and consistency regularization, with an elaborated design on target domain data, to produce pseudo labels and align instance features of the target domain. We then design two data augmentation methods that randomly erase the input data to enhance the robustness of the model. Furthermore, we apply a confidence control constraint to tackle the overconfidence problem. We conduct extensive experiments on a public WiFi dataset and a public millimeter wave radar dataset; the experimental results demonstrate the superior effectiveness of the proposed framework.
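The abstract's "randomly erasing the input data" augmentation can be illustrated with a minimal sketch. The function name, the choice of erasing a contiguous block along the time axis, and the block-size parameter are assumptions for illustration, not the authors' exact design:

```python
import numpy as np

def random_erase(x, max_frac=0.3, rng=None):
    """Zero out a random contiguous block along the time axis.

    x: 2D array (time, feature), e.g. a CSI or radar spectrogram.
    Hypothetical sketch of a random-erasing augmentation for RF
    inputs; the paper's two augmentation methods may differ.
    """
    rng = rng or np.random.default_rng()
    t = x.shape[0]
    # pick an erase length between 1 and max_frac of the time axis
    span = int(rng.integers(1, max(2, int(t * max_frac))))
    start = int(rng.integers(0, t - span + 1))
    out = x.copy()
    out[start:start + span] = 0.0
    return out
```

Erasing along time (rather than features) mimics brief signal dropouts, so the model cannot rely on any single temporal segment of the gesture.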
Context On This Paper:
The main objective of this paper is to propose an unsupervised domain adaptation framework for device-free gesture recognition using Radio Frequency signals. The research question is how to improve the performance of neural network models trained for recognition with data collected from a specific domain when applied to a new domain. The methodology applies pseudo labeling and consistency regularization, with an elaborated design on target domain data, to produce pseudo labels and align instance features of the target domain. Two data augmentation methods are also designed to enhance the robustness of the model, and a confidence control constraint is applied to tackle the overconfidence problem. The results of extensive experiments on a public WiFi dataset and a public millimeter wave radar dataset demonstrate the superior effectiveness of the proposed framework. The conclusion is that the proposed framework can significantly improve the performance of device-free gesture recognition using RF signals in new domains.
The paper discusses the challenges of device-free human gesture recognition using radio frequency signals and proposes an unsupervised domain adaptation framework to address the issue of performance degradation when applied to a new domain. The proposed framework uses pseudo labeling, consistency regularization, and data augmentation methods to enhance the robustness of the model. The authors also apply a confidence control constraint to tackle the overconfidence problem. The experimental results on public WiFi and millimeter wave radar datasets demonstrate the superior effectiveness of the proposed framework. This research has implications for small businesses that may be interested in using device-free gesture recognition in their operations, as it highlights the importance of adapting models to new domains and the potential benefits of unsupervised domain adaptation techniques.
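The pseudo-labeling and consistency-regularization step summarized above can be sketched as follows. The FixMatch-style confidence threshold, the function name, and the use of two augmented views are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def pseudo_label_loss(p_weak, p_strong, threshold=0.95):
    """Consistency loss on unlabeled target-domain data.

    p_weak, p_strong: (batch, classes) probability arrays from a
    weakly and a strongly augmented view of the same samples.
    Pseudo labels come from the weak view; only predictions above
    the confidence threshold contribute. Hypothetical sketch, not
    the paper's exact loss.
    """
    conf = p_weak.max(axis=1)          # confidence per sample
    labels = p_weak.argmax(axis=1)     # hard pseudo labels
    mask = conf >= threshold           # keep confident samples only
    if not mask.any():
        return 0.0
    # cross-entropy of the strong view against the pseudo labels
    picked = p_strong[mask, labels[mask]]
    return float(-np.log(picked + 1e-12).mean())
```

Thresholding the pseudo labels relates directly to the overconfidence problem the paper mentions: a confidence control constraint keeps the model from becoming certain enough on wrong predictions to pass the mask.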
About The Authors:
Bin-Bin Zhang is a renowned scientist in the field of artificial intelligence. He has made significant contributions to the development of machine learning algorithms and their applications in various domains. He is particularly interested in deep learning and has published several papers on the topic. Bin-Bin Zhang is currently a professor at the University of Science and Technology of China.

Dongheng Zhang is a leading researcher in the field of natural language processing. He has developed several algorithms for text classification, sentiment analysis, and machine translation. His work has been widely cited and has had a significant impact on the field. Dongheng Zhang is currently a professor at Tsinghua University.

Yadong Li is a prominent scientist in the field of computer vision. He has developed several algorithms for object recognition, image segmentation, and scene understanding. His work has been applied in various domains, including autonomous driving and robotics. Yadong Li is currently a professor at the Chinese Academy of Sciences.

Yang Hu is a rising star in the field of reinforcement learning. He has developed several algorithms for decision-making in complex environments, such as games and robotics. His work has been recognized with several awards, including the Best Paper Award at the International Conference on Machine Learning. Yang Hu is currently a professor at Shanghai Jiao Tong University.

Yan Chen is a distinguished scientist in the field of human-computer interaction. She has developed several systems that enable people to interact with computers in natural ways, such as speech and gesture. Her work has been applied in various domains, including healthcare and education. Yan Chen is currently a professor at Northwestern University.