Inertial body sensor networks (BSNs) that capture, detect, and recognize human movement hold ample potential for healthcare, and have generated considerable excitement in recent years. Yet, perhaps unsurprisingly, for these systems to reach their full potential, humans need to find ways to minimize their undesirable interference with the data. In other words, these systems rely on accurate data, and their efficacy is often blunted by the inaccuracies that accompany the human factor in their deployment.
Ambiguous instructions, together with the discomfort patients often experience when wearing these devices correctly, lead to mounting errors and insecure mounting, which, in turn, produce inaccurate data and compromise the utility of the entire system. That is where recent work by researchers from the University of Virginia comes in. Jiaqi Gong, Philip Asare, John Lach and Yanjun Qi, from the departments of Computer Science and Electrical and Computer Engineering, take the human factor in BSN deployment into account and propose a computational body-model framework, the Piecewise Linear Dynamical Model (PLDM) with Motion Stimulus Detection, built on the observation that segmenting human actions requires different temporal scales for different purposes. Their method is largely immune to the issues that tend to arise from human involvement. The model, presented at BodyNets 2014, the 9th International Conference on Body Area Networks, held in London in 2014, uses a simpler approach that, as experimental results demonstrate, outperforms the state-of-the-art methods the researchers compared it to across applications at different temporal scales.
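The paper's actual PLDM algorithm is not reproduced here, but the general idea behind piecewise linear dynamical modelling can be illustrated: fit a separate linear dynamics model (x[t+1] ≈ a·x[t] + b) to each segment of a motion signal, and choose the segment boundaries that minimize the total fitting residual. The sketch below is my own illustration under those assumptions, using a simple dynamic-programming search with a fixed segment count; the function names and synthetic data are hypothetical, not the authors' method.

```python
import numpy as np

def segment_cost(x, i, j):
    """Least-squares cost of one linear model x[t+1] = a*x[t] + b on x[i:j]."""
    prev, nxt = x[i:j - 1], x[i + 1:j]
    A = np.column_stack([prev, np.ones_like(prev)])
    coef, *_ = np.linalg.lstsq(A, nxt, rcond=None)
    return float(np.sum((nxt - A @ coef) ** 2))

def piecewise_linear_segment(x, k, min_len=3):
    """Split x into k segments, each with its own linear dynamics,
    minimizing total residual via dynamic programming.
    Returns the k-1 interior breakpoint indices."""
    n = len(x)
    INF = float("inf")
    cost = {(i, j): segment_cost(x, i, j)
            for i in range(n) for j in range(i + min_len, n + 1)}
    # dp[s][j]: best cost of covering x[:j] with s segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[-1] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for s in range(1, k + 1):
        for j in range(min_len * s, n + 1):
            for i in range(min_len * (s - 1), j - min_len + 1):
                c = dp[s - 1][i] + cost.get((i, j), INF)
                if c < dp[s][j]:
                    dp[s][j] = c
                    back[s][j] = i
    # Walk back from the end to recover the breakpoints.
    bps, j = [], n
    for s in range(k, 0, -1):
        j = back[s][j]
        bps.append(j)
    return sorted(bps[:-1])  # drop the leading 0

# Synthetic signal: two regimes with different linear dynamics (illustrative only).
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(40):                    # regime 1: x' = 0.9x + 1
    x.append(0.9 * x[-1] + 1.0 + 0.05 * rng.standard_normal())
for _ in range(40):                    # regime 2: x' = 0.5x - 2
    x.append(0.5 * x[-1] - 2.0 + 0.05 * rng.standard_normal())
x = np.array(x)

bps = piecewise_linear_segment(x, 2)
print(bps)  # breakpoint near the regime change around t = 41
```

The PLDM framework operates on multivariate inertial data and couples segmentation with motion stimulus detection; this one-dimensional, fixed-k sketch only conveys the piecewise-linear-dynamics intuition.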
The full paper can be accessed here.