Crowdsourcing Dietary Data for Nutritional Assessment

Summary

Dietary intake is difficult to quantify accurately. Traditional pen-and-paper or online reporting tools are burdensome and can be subject to bias. Photographing meals with a smartphone camera offers a potentially more convenient way to capture detailed dietary information.

Computer science researchers are applying machine learning and image segmentation to classify and quantify the components of meal photographs automatically, but this task is extremely difficult: problems with lighting and clutter mean the correct food is identified only around half of the time. Human input is therefore required. Either the participants themselves must provide extra information, or experts are employed to identify foods and portion sizes from the photographs, a process that is very expensive and time-consuming.

Humans are, however, extremely well adapted to assessing foods visually, and an alternative to participants and experts is to harness the wisdom of crowds by designing a simple task that untrained crowd-workers can complete. Crowdsourcing in science is a relatively new concept; excellent and well-publicised examples include the classification of galaxies in astronomical images and the discovery of protein-folding configurations. Crowdsourcing for dietary assessment in the UK is novel, so in this project we are creating a task in which crowds identify food types and portion sizes from photographs of meals. This could significantly lower the cost and burden of collecting diet information in large cohorts and clinical trials, and improve its feasibility.
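As a rough illustration of how "wisdom of crowds" judgements might be combined, the sketch below aggregates several untrained workers' answers for one meal photograph: a majority vote over food-type labels and a median over portion-size estimates. The function names, labels, and numbers are hypothetical, invented for this example; the project's actual task design and aggregation scheme are not specified in the summary.

```python
from collections import Counter
from statistics import median

def aggregate_food_labels(labels):
    """Majority vote over crowd-workers' food-type labels for one photo.

    Returns the winning label and the fraction of workers who chose it,
    a simple measure of inter-worker agreement.
    """
    counts = Counter(labels)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(labels)

def aggregate_portion_estimates(grams):
    """Median of crowd-workers' portion estimates, robust to outliers."""
    return median(grams)

# Hypothetical example: five workers assess the same meal photograph.
labels = ["rice", "rice", "pasta", "rice", "couscous"]
food, agreement = aggregate_food_labels(labels)
print(food, agreement)  # rice 0.6

portions = [150, 180, 160, 400, 170]  # grams; 400 is an outlying guess
print(aggregate_portion_estimates(portions))  # 170
```

The median is used here rather than the mean so that a single wildly wrong portion estimate does not dominate the combined answer.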
