Article Detail

Journal of Computers (indexed in EI, MEDLINE, Scopus)

Title: Human Activity Recognition with Multimodal Sensing of Wearable Sensors
Volume/Issue: 32:6
Authors: Chun-Mei Ma, Hui Zhao, Ying Li, Pan-Pan Wu, Tao Zhang, Bo-Jue Wang
Pages: 024-037
Keywords: human activity recognition, multimodal sensory data, discriminative features representation, wearable sensors
Publication Date: 2021-12
DOI: 10.53106/199115992021123206003


English Abstract

Human activity data sensed by wearable sensors exhibit multi-granularity characteristics. Although deep learning-based approaches have greatly improved recognition accuracy, most of them focus on designing new models to obtain deeper features, ignoring the fact that different deep features contribute differently to recognition accuracy. We argue that learning discriminative features would improve recognition performance. In this paper, we propose an end-to-end model, ABLSTM, which combines an attention module with a BLSTM network to recognize human activities. Specifically, the BLSTM is used to extract deep features of the various activities. The attention module then obtains a discriminative feature representation by suppressing irrelevant features and enhancing the features positively correlated with each activity. Therefore, compared with traditional deep learning-based approaches such as CNN- and RNN-based models, the features learned by ABLSTM are more discriminative and respond better to changes across activities. We evaluate our model on two public benchmark datasets, UCI and Opportunity. The results show that our model recognizes human activities well, with F1 scores as high as 99.0% and 92.7% on the two datasets respectively, which advances the state of the art in human activity recognition with mobile sensing.
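
The abstract describes the architecture only at a high level (a BLSTM feature extractor followed by an attention module that re-weights the learned features before classification). The sketch below illustrates one plausible reading of that pipeline, assuming PyTorch; the layer sizes, the additive attention over time steps, and all names (ABLSTMSketch, hidden width, number of classes) are illustrative assumptions, not the authors' exact ABLSTM configuration.

import torch
import torch.nn as nn


class ABLSTMSketch(nn.Module):
    def __init__(self, n_channels: int, n_classes: int, hidden: int = 64):
        super().__init__()
        # BLSTM extracts deep temporal features from the raw sensor sequence.
        self.blstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        # Attention scores each time step so discriminative features are
        # up-weighted and irrelevant ones are suppressed.
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) window of multimodal sensor readings
        h, _ = self.blstm(x)                         # (batch, time, 2*hidden)
        scores = torch.softmax(self.attn(h), dim=1)  # (batch, time, 1)
        context = (scores * h).sum(dim=1)            # attention-weighted summary
        return self.classifier(context)              # activity logits


if __name__ == "__main__":
    # Example: batch of 8 windows, 128 time steps, 9 sensor channels (UCI-like).
    model = ABLSTMSketch(n_channels=9, n_classes=6)
    logits = model(torch.randn(8, 128, 9))
    print(logits.shape)  # torch.Size([8, 6])

In this reading, the softmax over the time dimension plays the role of "reducing the irrelevant features and enhancing the positively correlated features": time steps whose hidden states are uninformative for the current activity receive low weights in the pooled representation fed to the classifier.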

