Quality of Movement from Video

The automatic analysis of data from video cameras in the home offers a great opportunity to determine quality metrics for common actions, such as stair-climbing or sit-to-stand transitions. For obvious reasons of privacy, SPHERE never gathers video data from people's homes: all data is anonymised within the home by removing the background and replacing the person with their silhouette. The challenge, therefore, is how to compute movement metrics from "silhouette sensors" that have an incomplete, often occluded or non-facing view of the person.
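As a rough illustration of this in-home anonymisation step (not the actual SPHERE pipeline), the sketch below shows how a depth frame might be reduced to a binary silhouette by subtracting a static background model; the function name, threshold and frame format are illustrative assumptions.

```python
import numpy as np
import cv2

def extract_silhouette(depth_frame: np.ndarray,
                       background: np.ndarray,
                       min_diff_mm: int = 80) -> np.ndarray:
    """Return a binary mask (255 = person, 0 = background).

    depth_frame, background: uint16 depth images in millimetres,
    e.g. 640x480 frames from a PrimeSense-class sensor.
    """
    # Pixels that are closer to the camera than the empty-room background
    # model by more than `min_diff_mm` are treated as foreground.
    diff = background.astype(np.int32) - depth_frame.astype(np.int32)
    diff[depth_frame == 0] = 0  # ignore pixels with no valid depth reading
    mask = (diff > min_diff_mm).astype(np.uint8) * 255

    # Remove speckle noise and fill small holes with morphological filtering.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```

Only this mask ever leaves the camera's processing unit, which is what makes the approach privacy-preserving.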

One example of how silhouettes can be used is to study participants' transitions from sitting to standing. Computer vision algorithms recognise the sitting and standing postures from the shape of the person and use that information to time each transition. This measurement, taken every day, can be a valuable indicator of general health or of recovery from mobility-related conditions. Silhouettes can also be used to study participants' activity levels, estimating the calories burnt during daily actions and tracking these over several months.
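To make the timing step concrete, here is a minimal sketch of how per-frame posture labels (assumed to come from a classifier run on the silhouettes) could be turned into sit-to-stand durations; the label names, frame rate and simple last-sitting-to-first-standing rule are assumptions for illustration, not the project's actual method.

```python
from typing import Iterable, List

def sit_to_stand_times(labels: Iterable[str], fps: float = 30.0) -> List[float]:
    """Time each sit-to-stand transition from a stream of per-frame
    posture labels ("sitting", "standing", or anything in between)."""
    durations = []
    last_sitting_frame = None

    for frame_idx, label in enumerate(labels):
        if label == "sitting":
            last_sitting_frame = frame_idx
        elif label == "standing" and last_sitting_frame is not None:
            # Duration from the last frame seen sitting to the first frame
            # confirmed standing; intermediate frames count towards the
            # transition time.
            durations.append((frame_idx - last_sitting_frame) / fps)
            last_sitting_frame = None

    return durations
```

For example, 90 sitting frames followed by 45 intermediate frames and then standing frames, at 30 fps, would yield a single transition of roughly 1.5 seconds.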

The figure below shows how this silhouette data is visualised:

[Figure: visualisation of the silhouette data]

There are usually three "Silhouette Sensors" installed, in the living room, hallway, and kitchen. These are off-the-shelf depth cameras (PrimeSense Carmine and ASUS Xtion Pro) that send their data to a small computer, which processes the output into a silhouette. This is then forwarded via Wi-Fi to the SPHERE Home Gateway hub.
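The forwarding step might look something like the sketch below, in which a processed silhouette mask (never the raw video) is compressed and pushed to the home gateway over the local network; the address, port and length-prefixed framing are placeholders and do not describe the actual SPHERE protocol.

```python
import socket
import struct

import cv2
import numpy as np

GATEWAY_ADDR = ("192.168.1.10", 9000)   # placeholder gateway address

def send_silhouette(sock: socket.socket, mask: np.ndarray) -> None:
    """Compress one binary silhouette frame as PNG and send it,
    length-prefixed, over an open TCP connection."""
    ok, png = cv2.imencode(".png", mask)  # lossless and small for binary masks
    if not ok:
        raise RuntimeError("PNG encoding failed")
    payload = png.tobytes()
    sock.sendall(struct.pack("!I", len(payload)) + payload)

if __name__ == "__main__":
    with socket.create_connection(GATEWAY_ADDR) as sock:
        mask = np.zeros((480, 640), dtype=np.uint8)  # stand-in silhouette frame
        send_silhouette(sock, mask)
```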

The video below shows the Silhouette Sensors in action:

[Video: Silhouette Sensors in action]

Academic Lead:

Prof. Majid Mirmehdi

With:

Prof. Dima Aldamen, Dr Tilo Burghardt, Dr Alessandro Masullo

Find out more about our research via the links below:

Publications

Perrett, T.; Masullo, A.; Burghardt, T.; Mirmehdi, M.; Damen, D.

Temporal-Relational CrossTransformers for Few-Shot Action Recognition

Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021

Perrett, T.; Masullo, A.; Burghardt, T.; Mirmehdi, M.; Damen, D.

Meta-Learning with Context-Agnostic Initialisations

Proceedings of the Asian Conference on Computer Vision (ACCV), 2020
