Publications



2020

(C)
Christos Diou, Ioannis Sarafis, Vasileios Papapanagiotou, Leonidas Alagialoglou, Irini Lekka, Dimitrios Filos, Leandros Stefanopoulos, Vasileios Kilintzis, Christos Maramis, Youla Karavidopoulou, Nikos Maglaveras, Ioannis Ioakimidis, Evangelia Charmandari, Penio Kassari, Athanasia Tragomalou, Monica Mars, Thien-An Ngoc Nguyen, Tahar Kechadi, Shane O'Donnell, Gerardine Doyle, Sarah Browne, Grace O'Malley, Rachel Heimeier, Katerina Riviou, Evangelia Koukoula, Konstantinos Filis, Maria Hassapidou, Ioannis Pagkalos, Daniel Ferri, Isabel Pérez and Anastasios Delopoulos
"BigO: A public health decision support system for measuring obesogenic behaviors of children in relation to their local environment"
42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2020 May
[Abstract][BibTex][pdf]

Obesity is a complex disease and its prevalence depends on multiple factors related to the local socioeconomic, cultural and urban context of individuals. Many obesity prevention strategies and policies, however, are horizontal measures that do not depend on context-specific evidence. In this paper we present an overview of BigO, a system designed to collect objective behavioral data from children and adolescent populations as well as their environment in order to support public health authorities in formulating effective, context-specific policies and interventions addressing childhood obesity. We present an overview of the data acquisition, indicator extraction, data exploration and analysis components of the BigO system, as well as an account of its preliminary pilot application in 33 schools and 2 clinics in four European countries, involving over 4,200 participants.

@inproceedings{diou2020bigo,
author={Christos Diou and Ioannis Sarafis and Vasileios Papapanagiotou and Leonidas Alagialoglou and Irini Lekka and Dimitrios Filos and Leandros Stefanopoulos and Vasileios Kilintzis and Christos Maramis and Youla Karavidopoulou and Nikos Maglaveras and Ioannis Ioakimidis and Evangelia Charmandari and Penio Kassari and Athanasia Tragomalou and Monica Mars and Thien-An Ngoc Nguyen and Tahar Kechadi and Shane O'Donnell and Gerardine Doyle and Sarah Browne and Grace O'Malley and Rachel Heimeier and Katerina Riviou and Evangelia Koukoula and Konstantinos Filis and Maria Hassapidou and Ioannis Pagkalos and Daniel Ferri and Isabel Pérez and Anastasios Delopoulos},
title={BigO: A public health decision support system for measuring obesogenic behaviors of children in relation to their local environment},
booktitle={42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
year={2020},
month={05},
date={2020-05-06},
url={https://arxiv.org/pdf/2005.02928.pdf},
abstract={Obesity is a complex disease and its prevalence depends on multiple factors related to the local socioeconomic, cultural and urban context of individuals. Many obesity prevention strategies and policies, however, are horizontal measures that do not depend on context-specific evidence. In this paper we present an overview of BigO, a system designed to collect objective behavioral data from children and adolescent populations as well as their environment in order to support public health authorities in formulating effective, context-specific policies and interventions addressing childhood obesity. We present an overview of the data acquisition, indicator extraction, data exploration and analysis components of the BigO system, as well as an account of its preliminary pilot application in 33 schools and 2 clinics in four European countries, involving over 4,200 participants.}
}

(C)
Vasileios Papapanagiotou, Ioannis Sarafis, Christos Diou, Ioannis Ioakimidis, Evangelia Charmandari and Anastasios Delopoulos
"Collecting big behavioral data for measuring behavior against obesity"
42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2020 May
[Abstract][BibTex][pdf]

Obesity is currently affecting very large portions of the global population. Effective prevention and treatment starts at the early age and requires objective knowledge of population-level behavior on the region/neighborhood scale. To this end, we present a system for extracting and collecting behavioral information on the individual-level objectively and automatically. The behavioral information is related to physical activity, types of visited places, and transportation mode used between them. The system employs indicator-extraction algorithms from the literature which we evaluate on publicly available datasets. The system has been developed and integrated in the context of the EU-funded BigO project that aims at preventing obesity in young populations.

@inproceedings{papapanagiotou2020collecting,
author={Vasileios Papapanagiotou and Ioannis Sarafis and Christos Diou and Ioannis Ioakimidis and Evangelia Charmandari and Anastasios Delopoulos},
title={Collecting big behavioral data for measuring behavior against obesity},
booktitle={42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
year={2020},
month={05},
date={2020-05-11},
url={https://arxiv.org/pdf/2005.04928.pdf},
abstract={Obesity is currently affecting very large portions of the global population. Effective prevention and treatment starts at the early age and requires objective knowledge of population-level behavior on the region/neighborhood scale. To this end, we present a system for extracting and collecting behavioral information on the individual-level objectively and automatically. The behavioral information is related to physical activity, types of visited places, and transportation mode used between them. The system employs indicator-extraction algorithms from the literature which we evaluate on publicly available datasets. The system has been developed and integrated in the context of the EU-funded BigO project that aims at preventing obesity in young populations.}
}

(C)
Ioannis Sarafis, Christos Diou, Vasileios Papapanagiotou, Leonidas Alagialoglou and Anastasios Delopoulos
"Inferring the Spatial Distribution of Physical Activity in Children Population from Characteristics of the Environment"
42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2020 May
[Abstract][BibTex][pdf]

Obesity affects a rising percentage of the children and adolescent population, contributing to decreased quality of life and increased risk for comorbidities. Although the major causes of obesity are known, the obesogenic behaviors manifest as a result of complex interactions of the individual with the living environment. For this reason, addressing childhood obesity remains a challenging problem for public health authorities. The BigO project relies on large-scale behavioral and environmental data collection to create tools that support policy making and intervention design. In this work, we propose a novel analysis approach for modeling the expected population behavior as a function of the local environment. We experimentally evaluate this approach in predicting the expected physical activity level in small geographic regions using urban environment characteristics. Experiments on data collected from 156 children and adolescents verify the potential of the proposed approach. Specifically, we train models that predict the physical activity level in a region, achieving 81% leave-one-out accuracy. In addition, we exploit the model predictions to automatically visualize heatmaps of the expected population behavior in areas of interest, from which we draw useful insights. Overall, the predictive models and the automatic heatmaps are promising tools in gaining direct perception for the spatial distribution of the population's behavior, with potential uses by public health authorities.
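
As a rough, hypothetical illustration of the evaluation setup described above (a classifier over region-level environment features, scored with leave-one-out cross-validation), the following Python sketch uses scikit-learn on invented toy data; the feature layout, model choice and numbers are assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the BigO implementation): predict a region-level
# physical activity class from urban-environment features and evaluate with
# leave-one-out cross-validation. Features, labels and model are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# One row per geographic region; columns stand for environment descriptors
# (e.g. parks, food outlets, sports facilities, road density).
X = rng.random((40, 4))
# Discretized physical activity level of the population in each region.
y = (X[:, 0] - X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2f}")
```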

@conference{sarafis2020inferring,
author={Ioannis Sarafis and Christos Diou and Vasileios Papapanagiotou and Leonidas Alagialoglou and Anastasios Delopoulos},
title={Inferring the Spatial Distribution of Physical Activity in Children Population from Characteristics of the Environment},
booktitle={42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
year={2020},
month={05},
date={2020-05-08},
url={https://arxiv.org/pdf/2005.03957.pdf},
abstract={Obesity affects a rising percentage of the children and adolescent population, contributing to decreased quality of life and increased risk for comorbidities. Although the major causes of obesity are known, the obesogenic behaviors manifest as a result of complex interactions of the individual with the living environment. For this reason, addressing childhood obesity remains a challenging problem for public health authorities. The BigO project relies on large-scale behavioral and environmental data collection to create tools that support policy making and intervention design. In this work, we propose a novel analysis approach for modeling the expected population behavior as a function of the local environment. We experimentally evaluate this approach in predicting the expected physical activity level in small geographic regions using urban environment characteristics. Experiments on data collected from 156 children and adolescents verify the potential of the proposed approach. Specifically, we train models that predict the physical activity level in a region, achieving 81% leave-one-out accuracy. In addition, we exploit the model predictions to automatically visualize heatmaps of the expected population behavior in areas of interest, from which we draw useful insights. Overall, the predictive models and the automatic heatmaps are promising tools in gaining direct perception for the spatial distribution of the population\'s behavior, with potential uses by public health authorities.}
}

2019

(J)
Christos Diou, Ioannis Sarafis, Vasileios Papapanagiotou, Ioannis Ioakimidis and Anastasios Delopoulos
"A methodology for obtaining objective measurements of population obesogenic behaviors in relation to the environment"
Statistical Journal of the IAOS, 35, (4), pp. 677-690, 2019 Dec
[Abstract][BibTex][pdf]

The way we eat and what we eat, the way we move and the way we sleep significantly impact the risk of becoming obese. These aspects of behavior decompose into several personal behavioral elements including our food choices, eating place preferences, transportation choices, sleeping periods and duration etc. Most of these elements are highly correlated in a causal way with the conditions of our local urban, social, regulatory and economic environment. To this end, the H2020 project “BigO: Big Data Against Childhood Obesity” (http://bigoprogram.eu) aims to create new sources of evidence together with exploration tools, assisting the Public Health Authorities in their effort to tackle childhood obesity. In this paper, we present the technology-based methodology that has been developed in the context of BigO in order to: (a) objectively monitor a matrix of a population’s obesogenic behavioral elements using commonly available wearable sensors (accelerometers, gyroscopes, GPS), embedded in smart phones and smart watches; (b) acquire information for the environment from open and online data sources; (c) provide aggregation mechanisms to correlate the population behaviors with the environmental characteristics; (d) ensure the privacy protection of the participating individuals; and (e) quantify the quality of the collected big data.

@article{DiouIAOS2019,
author={Christos Diou and Ioannis Sarafis and Vasileios Papapanagiotou and Ioannis Ioakimidis and Anastasios Delopoulos},
title={A methodology for obtaining objective measurements of population obesogenic behaviors in relation to the environment},
journal={Statistical Journal of the IAOS},
volume={35},
number={4},
pages={677-690},
year={2019},
month={12},
date={2019-12-10},
url={https://arxiv.org/pdf/1911.08315.pdf},
doi={https://doi.org/10.3233/SJI-190537},
abstract={The way we eat and what we eat, the way we move and the way we sleep significantly impact the risk of becoming obese. These aspects of behavior decompose into several personal behavioral elements including our food choices, eating place preferences, transportation choices, sleeping periods and duration etc. Most of these elements are highly correlated in a causal way with the conditions of our local urban, social, regulatory and economic environment. To this end, the H2020 project “BigO: Big Data Against Childhood Obesity” (http://bigoprogram.eu) aims to create new sources of evidence together with exploration tools, assisting the Public Health Authorities in their effort to tackle childhood obesity. In this paper, we present the technology-based methodology that has been developed in the context of BigO in order to: (a) objectively monitor a matrix of a population’s obesogenic behavioral elements using commonly available wearable sensors (accelerometers, gyroscopes, GPS), embedded in smart phones and smart watches; (b) acquire information for the environment from open and online data sources; (c) provide aggregation mechanisms to correlate the population behaviors with the environmental characteristics; (d) ensure the privacy protection of the participating individuals; and (e) quantify the quality of the collected big data.}
}

(J)
Billy Langlet, Petter Fagerberg, Anastasios Delopoulos, Vasileios Papapanagiotou, Christos Diou, Christos Maramis, Nikolaos Maglaveras, Anna Anvret and Ioannis Ioakimidis
"Predicting Real-Life Eating Behaviours Using Single School Lunches in Adolescents"
Nutrients, 11, (3), pp. 672, 2019 Mar
[Abstract][BibTex][pdf]

Large portion sizes and a high eating rate are associated with high energy intake and obesity. Most individuals maintain their food intake weight (g) and eating rate (g/min) rank in relation to their peers, despite food and environmental manipulations. Single meal measures may enable identification of “large portion eaters” and “fast eaters,” finding individuals at risk of developing obesity. The aim of this study was to predict real-life food intake weight and eating rate based on one school lunch. Twenty-four high-school students with a mean (±SD) age of 16.8 yr (±0.7) and body mass index of 21.9 (±4.1) were recruited, using no exclusion criteria. Food intake weight and eating rate was first self-rated (“Less,” “Average” or “More than peers”), then objectively recorded during one school lunch (absolute weight of consumed food in grams). Afterwards, subjects recorded as many main meals (breakfasts, lunches and dinners) as possible in real-life for a period of at least two weeks, using a Bluetooth connected weight scale and a smartphone application. On average participants recorded 18.9 (7.3) meals during the study. Real-life food intake weight was 327.4 g (±110.6), which was significantly lower (p = 0.027) than the single school lunch, at 367.4 g (±167.2). When the intra-class correlation of food weight intake between the objectively recorded real-life and school lunch meals was compared, the correlation was excellent (R = 0.91). Real-life eating rate was 33.5 g/min (±14.8), which was significantly higher (p = 0.010) than the single school lunch, at 27.7 g/min (±13.3). The intra-class correlation of the recorded eating rate between real-life and school lunch meals was very large (R = 0.74). The participants’ recorded food intake weights and eating rates were divided into terciles and compared between school lunches and real-life, with moderate or higher agreement (κ = 0.75 and κ = 0.54, respectively). In contrast, almost no agreement was observed between self-rated and real-life recorded rankings of food intake weight and eating rate (κ = 0.09 and κ = 0.08, respectively). The current study provides evidence that both food intake weight and eating rates per meal vary considerably in real-life per individual. However, based on these behaviours, most students can be correctly classified in regard to their peers based on single school lunches. In contrast, self-reported food intake weight and eating rate are poor predictors of real-life measures. Finally, based on the recorded individual variability of real-life food intake weight and eating rate, it is not advised to rank individuals based on single recordings collected in real-life settings.
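
The tercile-agreement analysis mentioned above can be illustrated with a small, hypothetical Python sketch using Cohen's kappa; the data below are invented and the exact statistical procedure of the study may differ.

```python
# Invented-data sketch of the tercile-agreement idea: rank subjects into
# terciles from a single school lunch and from real-life recordings, then
# quantify agreement with Cohen's kappa.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
school_lunch_g = rng.normal(370, 120, size=24)             # intake in one lunch (g)
real_life_g = school_lunch_g * rng.normal(0.9, 0.15, 24)   # mean real-life intake (g)

# Assign each subject to a tercile (0 = low, 1 = middle, 2 = high).
lunch_terciles = pd.qcut(school_lunch_g, 3, labels=False)
life_terciles = pd.qcut(real_life_g, 3, labels=False)

print("Cohen's kappa:", cohen_kappa_score(lunch_terciles, life_terciles))
```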

@article{Langlet2019Predicting,
author={Billy Langlet and Petter Fagerberg and Anastasios Delopoulos and Vasileios Papapanagiotou and Christos Diou and Christos Maramis and Nikolaos Maglaveras and Anna Anvret and Ioannis Ioakimidis},
title={Predicting Real-Life Eating Behaviours Using Single School Lunches in Adolescents},
journal={Nutrients},
volume={11},
number={3},
pages={672},
year={2019},
month={03},
date={2019-03-20},
url={https://www.mdpi.com/2072-6643/11/3/672/pdf},
doi={https://doi.org/10.3390/nu11030672},
abstract={Large portion sizes and a high eating rate are associated with high energy intake and obesity. Most individuals maintain their food intake weight (g) and eating rate (g/min) rank in relation to their peers, despite food and environmental manipulations. Single meal measures may enable identification of “large portion eaters” and “fast eaters,” finding individuals at risk of developing obesity. The aim of this study was to predict real-life food intake weight and eating rate based on one school lunch. Twenty-four high-school students with a mean (±SD) age of 16.8 yr (±0.7) and body mass index of 21.9 (±4.1) were recruited, using no exclusion criteria. Food intake weight and eating rate was first self-rated (“Less,” “Average” or “More than peers”), then objectively recorded during one school lunch (absolute weight of consumed food in grams). Afterwards, subjects recorded as many main meals (breakfasts, lunches and dinners) as possible in real-life for a period of at least two weeks, using a Bluetooth connected weight scale and a smartphone application. On average participants recorded 18.9 (7.3) meals during the study. Real-life food intake weight was 327.4 g (±110.6), which was significantly lower (p = 0.027) than the single school lunch, at 367.4 g (±167.2). When the intra-class correlation of food weight intake between the objectively recorded real-life and school lunch meals was compared, the correlation was excellent (R = 0.91). Real-life eating rate was 33.5 g/min (±14.8), which was significantly higher (p = 0.010) than the single school lunch, at 27.7 g/min (±13.3). The intra-class correlation of the recorded eating rate between real-life and school lunch meals was very large (R = 0.74). The participants’ recorded food intake weights and eating rates were divided into terciles and compared between school lunches and real-life, with moderate or higher agreement (κ = 0.75 and κ = 0.54, respectively). In contrast, almost no agreement was observed between self-rated and real-life recorded rankings of food intake weight and eating rate (κ = 0.09 and κ = 0.08, respectively). The current study provides evidence that both food intake weight and eating rates per meal vary considerably in real-life per individual. However, based on these behaviours, most students can be correctly classified in regard to their peers based on single school lunches. In contrast, self-reported food intake weight and eating rate are poor predictors of real-life measures. Finally, based on the recorded individual variability of real-life food intake weight and eating rate, it is not advised to rank individuals based on single recordings collected in real-life settings.}
}

2018

(J)
Janet van den Boer, Annemiek van der Lee, Lingchuan Zhou, Vasileios Papapanagiotou, Christos Diou, Anastasios Delopoulos and Monica Mars
"The SPLENDID Eating Detection Sensor: Development and Feasibility Study"
JMIR mHealth and uHealth, 6, (9), pp. 170, 2018 Sep
[Abstract][BibTex]

The available methods for monitoring food intake---which for a great part rely on self-report---often provide biased and incomplete data. Currently, no good technological solutions are available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before. To date, literature is lacking a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful wearer comfort and discreetness of the device need to be improved.

@article{2018Boer,
author={Janet van den Boer and Annemiek van der Lee and Lingchuan Zhou and Vasileios Papapanagiotou and Christos Diou and Anastasios Delopoulos and Monica Mars},
title={The SPLENDID Eating Detection Sensor: Development and Feasibility Study},
journal={JMIR mHealth and uHealth},
volume={6},
number={9},
pages={170},
year={2018},
month={09},
date={2018-09-04},
doi={https://doi.org/10.2196/mhealth.9781},
issn={2291-5222},
abstract={The available methods for monitoring food intake---which for a great part rely on self-report---often provide biased and incomplete data. Currently, no good technological solutions are available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before. To date, literature is lacking a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful wearer comfort and discreetness of the device need to be improved.}
}

(J)
Maryam Esfandiari, Vasilis Papapanagiotou, Christos Diou, Modjtaba Zandian, Jenny Nolstam, Per Södersten and Cecilia Bergh
"Control of Eating Behavior Using a Novel Feedback System"
JoVE, (135), 2018 May
[Abstract][BibTex]

Subjects eat food from a plate that sits on a scale connected to a computer that records the weight loss of the plate during the meal and makes up a curve of food intake, meal duration and rate of eating modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders and has reduced the weight and improved the health of severely overweight patients.
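
A minimal sketch of the quadratic modeling idea described above (not the Mandometer's own algorithm): fit a second-order polynomial to a cumulative intake curve and read off simple indicators; the measurements are simulated.

```python
# Minimal sketch (not the Mandometer algorithm): fit a quadratic to a
# simulated cumulative food intake curve and read off simple indicators.
import numpy as np

t = np.linspace(0, 12, 25)                                  # time since meal start (min)
intake = 30 * t - 0.9 * t**2 + np.random.default_rng(2).normal(0, 5, t.size)

a, b, c = np.polyfit(t, intake, deg=2)                      # intake(t) ~ a*t^2 + b*t + c
total_intake = np.polyval([a, b, c], t[-1])                 # estimated intake at meal end (g)
initial_rate = b                                            # d(intake)/dt at t = 0 (g/min)
print(f"total ~{total_intake:.0f} g, initial rate ~{initial_rate:.1f} g/min")
```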

@article{Esfandiari2018,
author={Maryam Esfandiari and Vasilis Papapanagiotou and Christos Diou and Modjtaba Zandian and Jenny Nolstam and Per Södersten and Cecilia Bergh},
title={Control of Eating Behavior Using a Novel Feedback System},
journal={JoVE},
number={135},
year={2018},
month={05},
date={2018-05-08},
doi={https://doi.org/10.3791/57432},
abstract={Subjects eat food from a plate that sits on a scale connected to a computer that records the weight loss of the plate during the meal and makes up a curve of food intake, meal duration and rate of eating modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders and has reduced the weight and improved the health of severely overweight patients.}
}

(J)
Vasilis Papapanagiotou, Christos Diou, Ioannis Ioakimidis, Per Sodersten and Anastasios Delopoulos
"Automatic analysis of food intake and meal microstructure based on continuous weight measurements"
IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2018 Mar
[Abstract][BibTex][pdf]

The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is next used to model a meal as a sequence of such events. The selection of the most likely parse tree is finally used to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators, and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.
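
As a simplified illustration of only the first step described above (mapping plate-weight changes to a string of event symbols), a short sketch follows; the grammar-based parsing itself is not reproduced, and the thresholds and data are invented.

```python
# Simplified illustration of the symbolization step only: turn a plate-weight
# time series into a string of candidate events ('b' = bite, 'a' = food
# addition). Thresholds and measurements are invented; the context-free
# grammar parsing from the paper is not shown.
import numpy as np

weight = np.array([350, 350, 338, 338, 325, 410, 398, 385, 385, 371], dtype=float)
steps = np.diff(weight)

symbols = []
for s in steps:
    if s <= -5:              # sizeable weight drop -> candidate bite
        symbols.append("b")
    elif s >= 5:             # sizeable weight increase -> candidate food addition
        symbols.append("a")

print("".join(symbols))      # -> 'bbabbb' for the series above
```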

@article{Vassilis2018,
author={Vasilis Papapanagiotou and Christos Diou and Ioannis Ioakimidis and Per Sodersten and Anastasios Delopoulos},
title={Automatic analysis of food intake and meal microstructure based on continuous weight measurements},
journal={IEEE Journal of Biomedical and Health Informatics},
volume={PP},
number={99},
pages={1-1},
year={2018},
month={03},
date={2018-03-05},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2018automated.pdf},
doi={https://doi.org/10.1109/JBHI.2018.2812243},
abstract={The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is next used to model a meal as a sequence of such events. The selection of the most likely parse tree is finally used to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators, and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.}
}

2017

(J)
Billy Langlet, Anna Anvret, Christos Maramis, Ioannis Moulos, Vasileios Papapanagiotou, Christos Diou, Eirini Lekka, Rachel Heimeier, Anastasios Delopoulos and Ioannis Ioakimidis
"Objective measures of eating behaviour in a Swedish high school"
Behaviour & Information Technology, 36, (10), pp. 1005-1013, 2017 May
[Abstract][BibTex][pdf]

Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies of quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app in order to record eating behaviours in a sample of 41 students, 16–17 years old. Without disturbing the normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females, during lunches of similar lengths. While both groups took similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger scale use of our methods is feasible, but more hypotheses-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.

@article{Langlet2017,
author={Billy Langlet and Anna Anvret and Christos Maramis and Ioannis Moulos and Vasileios Papapanagiotou and Christos Diou and Eirini Lekka and Rachel Heimeier and Anastasios Delopoulos and Ioannis Ioakimidis},
title={Objective measures of eating behaviour in a Swedish high school},
journal={Behaviour & Information Technology},
volume={36},
number={10},
pages={1005-1013},
year={2017},
month={05},
date={2017-05-06},
url={https://doi.org/10.1080/0144929X.2017.1322146},
doi={https://doi.org/10.1080/0144929X.2017.1322146},
abstract={Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies of quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app in order to record eating behaviours in a sample of 41 students, 16–17 years old. Without disturbing the normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females, during lunches of similar lengths. While both groups took similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger scale use of our methods is feasible, but more hypotheses-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.}
}

2017

(C)
Vasilis Papapanagiotou, Christos Diou, Lingjuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"The SPLENDID chewing detection challenge"
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 817-820, IEEE, 2017 Jul
[Abstract][BibTex][pdf]

Monitoring of eating behavior using wearable technology is receiving increased attention, driven by the recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. There are several chewing sensor types and chewing detection algorithms proposed in the bibliography, however no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided both at eating event level and at individual chew level. We also provide a baseline evaluation method, and introduce the “challenge” of improving the baseline chewing detection algorithms. The dataset can be downloaded from http://dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code can be downloaded from https://github.com/mug-auth/chewing-detection-challenge.git.

@inproceedings{8036949,
author={Vasilis Papapanagiotou and Christos Diou and Lingjuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={The SPLENDID chewing detection challenge},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={817-820},
publisher={IEEE},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017splendid.pdf},
doi={https://doi.org/10.1109/EMBC.2017.8036949},
abstract={Monitoring of eating behavior using wearable technology is receiving increased attention, driven by the recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. There are several chewing sensor types and chewing detection algorithms proposed in the bibliography, however no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided both at eating event level and at individual chew level. We also provide a baseline evaluation method, and introduce the “challenge” of improving the baseline chewing detection algorithms. The dataset can be downloaded from http://dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code can be downloaded from https://github.com/mug-auth/chewing-detection-challenge.git.}
}

(C)
Vasilis Papapanagiotou, Christos Diou and Anastasios Delopoulos
"Chewing detection from an in-ear microphone using convolutional neural networks"
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1258-1261, 2017 Jul
[Abstract][BibTex][pdf]

Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring still remains a challenging task. This is mainly due to the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as due to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distinctive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature-learning in the time domain with 1-dimensional convolutional neural networks for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.
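
The following PyTorch sketch illustrates the general architecture described above (1-D convolutions over raw audio windows followed by fully connected layers); layer counts, kernel sizes and the window length are illustrative assumptions, not the configuration reported in the paper.

```python
# Architecture sketch for the approach described above: 1-D convolutions over
# raw audio windows, followed by fully connected layers that output a
# chewing / non-chewing score. Layer sizes and the window length are
# illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ChewingCNN(nn.Module):
    def __init__(self, window_len=4000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=64, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=32, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        with torch.no_grad():                     # infer flattened feature size
            n = self.features(torch.zeros(1, 1, window_len)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):                         # x: (batch, 1, window_len) raw audio
        return self.classifier(self.features(x))

model = ChewingCNN()
logits = model(torch.randn(2, 1, 4000))           # chewing scores for two windows
print(logits.shape)                               # torch.Size([2, 1])
```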

@inproceedings{8037060,
author={Vasilis Papapanagiotou and Christos Diou and Anastasios Delopoulos},
title={Chewing detection from an in-ear microphone using convolutional neural networks},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={1258-1261},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017chewing.pdf},
doi={https://doi.org/10.1109/EMBC.2017.8037060},
abstract={Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring still remains a challenging task. This is mainly due to the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as due to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distinctive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature-learning in the time domain with 1-dimensional convolutional neural networks for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.}
}

(C)
Iason Karakostas, Vasileios Papapanagiotou and Anastasios Delopoulos
"Building Parsimonious SVM Models for Chewing Detection and Adapting Them to the User"
New Trends in Image Analysis and Processing -- ICIAP 2017, pp. 403-410, Springer International Publishing, Cham, 2017 Dec
[Abstract][BibTex]

Monitoring of eating activity is a well-established yet challenging problem. Various sensors have been proposed in the literature, including in-ear microphones, strain sensors, and photoplethysmography. Most of these approaches use detection algorithms that include machine learning; however, a universal, non user-specific model is usually trained from an available dataset for the final system. In this paper, we present a chewing detection system that can adapt to each user independently using active learning (AL) with minimal intrusiveness. The system captures audio from a commercial bone-conduction microphone connected to an Android smart-phone. We employ a state-of-the-art feature extraction algorithm and extend the Support Vector Machine (SVM) classification stage using AL. The effectiveness of the adaptable classification model can quickly converge to that achieved when using the entire available training set. We further use AL to create SVM models with a small number of support vectors, thus reducing the computational requirements, without significantly sacrificing effectiveness. To support our arguments, we have recorded a dataset from eight participants, each performing once or twice a standard protocol that includes consuming various types of food, as well as non-eating activities such as silent and noisy environments and conversation. Results show accuracy of 0.85 and F1 score of 0.83 in the best case for the user-specific models.
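
A hypothetical sketch of the underlying active-learning idea (uncertainty sampling with an SVM) is given below; it is not the paper's procedure, and the dataset, kernel and query budget are invented.

```python
# Hypothetical uncertainty-sampling sketch: start from a few labelled samples,
# repeatedly query the sample closest to the SVM decision boundary, and
# retrain. Dataset, kernel and query budget are invented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
labelled = list(range(10))                       # indices with known labels
pool = [i for i in range(len(y)) if i not in labelled]

for _ in range(20):                              # query budget
    clf = SVC(kernel="rbf").fit(X[labelled], y[labelled])
    margins = np.abs(clf.decision_function(X[pool]))
    query = pool.pop(int(np.argmin(margins)))    # most uncertain pool sample
    labelled.append(query)                       # in practice: ask the user for its label

clf = SVC(kernel="rbf").fit(X[labelled], y[labelled])
print("labelled samples:", len(labelled), "| support vectors:", int(clf.n_support_.sum()))
```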

@inproceedings{Karakostas2017,
author={Iason Karakostas and Vasileios Papapanagiotou and Anastasios Delopoulos},
title={Building Parsimonious SVM Models for Chewing Detection and Adapting Them to the User},
booktitle={New Trends in Image Analysis and Processing -- ICIAP 2017},
pages={403-410},
publisher={Springer International Publishing},
editor={Sebastiano Battiato and Giovanni Maria Farinella and Marco Leo and Giovanni Gallo},
address={Cham},
year={2017},
month={12},
date={2017-12-31},
doi={https://doi.org/10.1007/978-3-319-70742-6_38},
abstract={Monitoring of eating activity is a well-established yet challenging problem. Various sensors have been proposed in the literature, including in-ear microphones, strain sensors, and photoplethysmography. Most of these approaches use detection algorithms that include machine learning; however, a universal, non user-specific model is usually trained from an available dataset for the final system. In this paper, we present a chewing detection system that can adapt to each user independently using active learning (AL) with minimal intrusiveness. The system captures audio from a commercial bone-conduction microphone connected to an Android smart-phone. We employ a state-of-the-art feature extraction algorithm and extend the Support Vector Machine (SVM) classification stage using AL. The effectiveness of the adaptable classification model can quickly converge to that achieved when using the entire available training set. We further use AL to create SVM models with a small number of support vectors, thus reducing the computational requirements, without significantly sacrificing effectiveness. To support our arguments, we have recorded a dataset from eight participants, each performing once or twice a standard protocol that includes consuming various types of food, as well as non-eating activities such as silent and noisy environments and conversation. Results show accuracy of 0.85 and F1 score of 0.83 in the best case for the user-specific models.}
}

2016

(J)
Vasilis Papapanagiotou, Christos Diou, Lingchuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"A novel chewing detection system based on PPG, audio and accelerometry"
IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2016 Jan
[Abstract][BibTex][pdf]

In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smart phones, can be used to robustly extract objective, and real-time measurements of human behaviour. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this work, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high accuracy and low sampling rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects, that includes more than 60 hours of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.
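
The late-fusion step described above can be sketched, under assumptions, as follows: one SVM per modality, with the two decision scores combined by a weighted sum; the features, fusion weights and threshold are placeholders rather than the paper's values.

```python
# Late-fusion sketch under assumptions: one SVM per modality, decision scores
# combined by a weighted sum. Features, weights and threshold are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_audio, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_ppg = X_audio[:, :8] + rng.normal(0, 0.5, (200, 8))    # stand-in for a second modality

svm_audio = SVC(kernel="rbf").fit(X_audio, y)
svm_ppg = SVC(kernel="rbf").fit(X_ppg, y)

# Late fusion: weighted sum of the per-modality decision scores.
fused = 0.7 * svm_audio.decision_function(X_audio) + 0.3 * svm_ppg.decision_function(X_ppg)
pred = (fused > 0).astype(int)
print("fused accuracy (on the training data):", (pred == y).mean())
```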

@article{7736096,
author={Vasilis Papapanagiotou and Christos Diou and Lingchuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={A novel chewing detection system based on PPG, audio and accelerometry},
journal={IEEE Journal of Biomedical and Health Informatics},
volume={PP},
number={99},
pages={1-1},
year={2016},
month={01},
date={2016-01-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017novel.pdf},
doi={https://doi.org/10.1109/JBHI.2016.2625271},
keywords={Ear;Informatics;Microphones;Monitoring;Sensor systems;Signal processing algorithms},
abstract={In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smart phones, can be used to robustly extract objective, and real-time measurements of human behaviour. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this work, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high accuracy and low sampling rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects, that includes more than 60 hours of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.}
}

(J)
Vasileios Papapanagiotou, Christos Diou and Anastasios Delopoulos
"Improving Concept-Based Image Retrieval with Training Weights Computed from Tags"
ACM Transactions on Multimedia Computing, Communications, and Applications, 12, (2), 2016 Mar
[Abstract][BibTex][pdf]

This article presents a novel approach to training classifiers for concept detection using tags and a variant of Support Vector Machine that enables the usage of training weights per sample. Combined with an appropriate tag weighting mechanism, more relevant samples play a more important role in the calibration of the final concept-detector model. We propose a complete, automated framework that (i) calculates relevance scores for each image-concept pair based on image tags, (ii) transforms the scores into relevance probabilities and automatically annotates each image according to this probability, (iii) transforms either the relevance scores or the probabilities into appropriate training weights and finally, (iv) incorporates the training weights and the visual features into a Fuzzy Support Vector Machine classifier to build the concept-detector model. The framework can be applied to online public collections, by gathering a large pool of diverse images, and using the calculated probability to select a training set and the associated training weights. To evaluate our argument, we experiment on two large annotated datasets. Experiments highlight the retrieval effectiveness of the proposed approach. Furthermore, experiments with various levels of annotation error show that using weights derived from tags significantly increases the robustness of the resulting concept detectors.
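
As a simplified, hypothetical sketch of the sample-weighting idea, the snippet below uses scikit-learn's per-sample weights as a stand-in for the Fuzzy SVM variant used in the article; the tag-derived relevance scores are invented.

```python
# Hypothetical sketch: weight training samples by a tag-derived relevance
# score. scikit-learn's per-sample weights stand in for the Fuzzy SVM variant
# of the article; the relevance values here are random placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# In the real framework these scores come from a tag-weighting mechanism.
relevance = np.random.default_rng(0).uniform(0.2, 1.0, size=len(y))
weights = np.where(y == 1, relevance, 1.0)       # noisy positives get lower weight

detector = SVC(kernel="rbf", probability=True)
detector.fit(X, y, sample_weight=weights)
print(detector.predict_proba(X[:3]))
```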

@article{Papapanagiotou2016Improving,
author={Vasileios Papapanagiotou and Christos Diou and Anastasios Delopoulos},
title={Improving Concept-Based Image Retrieval with Training Weights Computed from Tags},
journal={ACM Transactions on Multimedia Computing, Communications, and Applications},
volume={12},
number={2},
year={2016},
month={03},
date={2016-03-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2016improving.pdf},
doi={https://doi.org/10.1145/2790230},
abstract={This article presents a novel approach to training classifiers for concept detection using tags and a variant of Support Vector Machine that enables the usage of training weights per sample. Combined with an appropriate tag weighting mechanism, more relevant samples play a more important role in the calibration of the final concept-detector model. We propose a complete, automated framework that (i) calculates relevance scores for each image-concept pair based on image tags, (ii) transforms the scores into relevance probabilities and automatically annotates each image according to this probability, (iii) transforms either the relevance scores or the probabilities into appropriate training weights and finally, (iv) incorporates the training weights and the visual features into a Fuzzy Support Vector Machine classifier to build the concept-detector model. The framework can be applied to online public collections, by gathering a large pool of diverse images, and using the calculated probability to select a training set and the associated training weights. To evaluate our argument, we experiment on two large annotated datasets. Experiments highlight the retrieval effectiveness of the proposed approach. Furthermore, experiments with various levels of annotation error show that using weights derived from tags significantly increases the robustness of the resulting concept detectors.}
}

2016

(C)
Vasilis Papapanagiotou, Christos Diou, Lingchuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"A novel approach for chewing detection based on a wearable PPG sensor"
2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6485-6488, 2016 Aug
[Abstract][BibTex][pdf]

Monitoring of human eating behaviour has been attracting interest over the last few years, as a means to a healthy lifestyle, but also due to its association with serious health conditions, such as eating disorders and obesity. Use of self-reports and other non-automated means of monitoring have been found to be unreliable, compared to the use of wearable sensors. Various modalities have been reported, such as acoustic signal from ear-worn microphones, or signal from wearable strain sensors. In this work, we introduce a new sensor for the task of chewing detection, based on a novel photoplethysmography (PPG) sensor placed on the outer earlobe to perform the task. We also present a processing pipeline that includes two chewing detection algorithms from literature and one new algorithm, to process the captured PPG signal, and present their effectiveness. Experiments are performed on an annotated dataset recorded from 21 individuals, including more than 10 hours of eating and non-eating activities. Results show that the PPG sensor can be successfully used to support dietary monitoring.

@inproceedings{7592214,
author={Vasilis Papapanagiotou and Christos Diou and Lingchuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={A novel approach for chewing detection based on a wearable PPG sensor},
booktitle={2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={6485-6488},
year={2016},
month={08},
date={2016-08-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2016novel.pdf},
doi={10.1109/EMBC.2016.7592214},
keywords={Ear;Microphones;Monitoring;Medical disorders;Optical sensors;Patient monitoring;Photoplethysmography;PPG signal;Acoustic signal;Chewing detection;Dietary monitoring;Ear-worn microphone;Eating disorder;Health condition;Human eating behaviour monitoring;Noneating Activity;Obesity;Wearable PPG sensor;Wearable sensor;Wearable strain sensor;Light emitting diodes;Muscles;Pipelines;Prototypes},
abstract={Monitoring of human eating behaviour has been attracting interest over the last few years, as a means to a healthy lifestyle, but also due to its association with serious health conditions, such as eating disorders and obesity. Use of self-reports and other non-automated means of monitoring have been found to be unreliable, compared to the use of wearable sensors. Various modalities have been reported, such as acoustic signal from ear-worn microphones, or signal from wearable strain sensors. In this work, we introduce a new sensor for the task of chewing detection, based on a novel photoplethysmography (PPG) sensor placed on the outer earlobe to perform the task. We also present a processing pipeline that includes two chewing detection algorithms from literature and one new algorithm, to process the captured PPG signal, and present their effectiveness. Experiments are performed on an annotated dataset recorded from 21 individuals, including more than 10 hours of eating and non-eating activities. Results show that the PPG sensor can be successfully used to support dietary monitoring.}
}
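The abstract does not detail the three detection algorithms, so the following is only a generic, hypothetical illustration of windowed chewing detection on a PPG signal: band-pass filtering around plausible chewing-rate frequencies, then thresholding per-window energy. The cut-off band, window length and threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a simple chewing detector on an earlobe PPG signal:
# band-pass around typical chewing rates, then threshold per-window energy.
# All parameters are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_chewing(ppg, fs, win_s=5.0, band=(0.5, 3.0), thresh=1e-3):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ppg)
    win = int(win_s * fs)
    n_windows = len(filtered) // win
    energy = np.array([np.mean(filtered[i * win:(i + 1) * win] ** 2)
                       for i in range(n_windows)])
    return energy > thresh                # one boolean decision per window
```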

2015

(C)
Vasileios Papapanagiotou, Christos Diou, Billy Langlet, Ioannis Ioakimidis and Anastasios Delopoulos
"Automated Extraction of Food Intake Indicators from Continuous Meal Weight Measurements"
Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, April 15-17, 2015. Proceedings, Part II, pp. 35-46, Springer International Publishing, Cham, 2015 Jan
[Abstract][BibTex]

Recent studies and clinical practice have shown that the extraction of detailed eating behaviour indicators is critical in identifying risk factors and/or treating obesity and eating disorders, such as anorexia and bulimia nervosa. A number of single meal analysis methods that have been successfully applied are based on the Mandometer, a weight scale that continuously measures the weight of food on a plate over the course of a meal. Experimental meal analysis is performed using the cumulative food intake curve, which is produced by the semi-automatic processing of the Mandometer weight measurements, in tandem with the video recordings of the eating session. Due to its complexity and the video recording dependence, this process is not suited to a clinical or a real-life setting. In this work, we evaluate a method for automating the extraction of an accurate food intake curve, corrected for food additions during the meal and artificial weight fluctuations, using only the raw Mandometer output. Since the method requires no manual corrections or external video recordings it is appropriate for clinical or free-living use. Three algorithms are presented based on rules, greedy decisioning and exhaustive search, as well as evaluation methods of the Mandometer measurements. Experiments on a set of 114 meals collected from both normal and disordered eaters in a clinical environment illustrate the effectiveness of the proposed approach.

@inproceedings{Papapanagiotou2015Automated,
author={Vasileios Papapanagiotou and Christos Diou and Billy Langlet and Ioannis Ioakimidis and Anastasios Delopoulos},
title={Automated Extraction of Food Intake Indicators from Continuous Meal Weight Measurements},
booktitle={Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, April 15-17, 2015. Proceedings, Part II},
pages={35-46},
publisher={Springer International Publishing},
editor={Ortu{\~n}o, Francisco and Rojas, Ignacio},
address={Cham},
year={2015},
month={01},
date={2015-01-01},
doi={10.1007/978-3-319-16480-9_4},
isbn={978-3-319-16480-9},
abstract={Recent studies and clinical practice have shown that the extraction of detailed eating behaviour indicators is critical in identifying risk factors and/or treating obesity and eating disorders, such as anorexia and bulimia nervosa. A number of single meal analysis methods that have been successfully applied are based on the Mandometer, a weight scale that continuously measures the weight of food on a plate over the course of a meal. Experimental meal analysis is performed using the cumulative food intake curve, which is produced by the semi-automatic processing of the Mandometer weight measurements, in tandem with the video recordings of the eating session. Due to its complexity and the video recording dependence, this process is not suited to a clinical or a real-life setting. In this work, we evaluate a method for automating the extraction of an accurate food intake curve, corrected for food additions during the meal and artificial weight fluctuations, using only the raw Mandometer output. Since the method requires no manual corrections or external video recordings it is appropriate for clinical or free-living use. Three algorithms are presented based on rules, greedy decisioning and exhaustive search, as well as evaluation methods of the Mandometer measurements. Experiments on a set of 114 meals collected from both normal and disordered eaters in a clinical environment illustrate the effectiveness of the proposed approach.}
}
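A toy sketch in the spirit of the rule-based variant mentioned in the abstract: successive plate-weight drops are treated as bites and rises as food additions, with a small threshold to suppress sensor fluctuations. The threshold and the simple differencing step are assumptions for illustration; they are not the paper's algorithm or parameter values.

```python
# Hypothetical rule-based sketch: classify steps in a plate-weight sequence
# as bites (weight drops) or food additions (weight rises), ignoring small
# fluctuations. Thresholds are illustrative, not the paper's values.
import numpy as np

def extract_events(weights, min_step=2.0):
    """weights: plate weight in grams, one sample per measurement tick."""
    diffs = np.diff(weights)
    events = []
    for i, d in enumerate(diffs):
        if d <= -min_step:
            events.append((i, "bite", -d))          # grams removed
        elif d >= min_step:
            events.append((i, "food_addition", d))  # grams added
    return events

def intake_curve(weights, min_step=2.0):
    """Cumulative grams eaten over time, counting only bite-like drops."""
    intake = np.zeros(len(weights))
    for i, kind, grams in extract_events(weights, min_step):
        if kind == "bite":
            intake[i + 1:] += grams
    return intake
```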

(C)
Vasileios Papapanagiotou, Christos Diou, Zhou Lingchuan, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"Fractal Nature of Chewing Sounds"
New Trends in Image Analysis and Processing--ICIAP 2015 Workshops, pp. 401-408, 2015 Apr
[Abstract][BibTex][pdf]

In the battle against Obesity as well as Eating Disorders, non-intrusive dietary monitoring has been investigated by many researchers. For this purpose, one of the most promising modalities is the acoustic signal captured by a common microphone placed inside the outer ear canal. Various chewing detection algorithms for this type of signals exist in the literature. In this work, we perform a systematic analysis of the fractal nature of chewing sounds, and find that the Fractal Dimension is substantially different between chewing and talking. This holds even for severely down-sampled versions of the recordings. We derive chewing detectors based on the fractal dimension of the recorded signals that can clearly discriminate chewing from non-chewing sounds. We experimentally evaluate snacking detection based on the proposed chewing detector, and we compare our approach against well-known counterparts. Experimental results on a large dataset of 10 subjects and total recordings duration of more than 8 hours demonstrate the high effectiveness of our method. Furthermore, there exists indication that discrimination between different properties (such as crispness) is possible.

@inproceedings{Papapanagiotou2015Fractal,
author={Vasileios Papapanagiotou and Christos Diou and Zhou Lingchuan and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={Fractal Nature of Chewing Sounds},
booktitle={New Trends in Image Analysis and Processing--ICIAP 2015 Workshops},
pages={401-408},
year={2015},
month={04},
date={2015-04-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2015fractal.pdf},
doi={10.1007/978-3-319-23222-5_49},
abstract={In the battle against Obesity as well as Eating Disorders, non-intrusive dietary monitoring has been investigated by many researchers. For this purpose, one of the most promising modalities is the acoustic signal captured by a common microphone placed inside the outer ear canal. Various chewing detection algorithms for this type of signals exist in the literature. In this work, we perform a systematic analysis of the fractal nature of chewing sounds, and find that the Fractal Dimension is substantially different between chewing and talking. This holds even for severely down-sampled versions of the recordings. We derive chewing detectors based on the fractal dimension of the recorded signals that can clearly discriminate chewing from non-chewing sounds. We experimentally evaluate snacking detection based on the proposed chewing detector, and we compare our approach against well-known counterparts. Experimental results on a large dataset of 10 subjects and total recordings duration of more than 8 hours demonstrate the high effectiveness of our method. Furthermore, there exists indication that discrimination between different properties (such as crispness) is possible.}
}
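The abstract does not state which fractal-dimension estimator is used, so as a purely illustrative stand-in the sketch below computes the Katz fractal dimension of a short audio window; whether this matches the paper's estimator is an assumption, and the windowing shown in the usage comment is likewise hypothetical.

```python
# Hypothetical sketch: Katz fractal dimension of a signal window. The paper's
# abstract does not specify its fractal-dimension estimator, so this is only
# an illustrative choice; chewing vs. talking would be compared per window.
import numpy as np

def katz_fd(x):
    x = np.asarray(x, dtype=float)
    n = len(x) - 1
    # Total "length" of the waveform (sum of successive point distances
    # in the (sample index, amplitude) plane).
    L = np.sum(np.sqrt(1.0 + np.diff(x) ** 2))
    # Maximum distance from the first sample to any other sample.
    d = np.max(np.sqrt(np.arange(1, n + 1) ** 2 + (x[1:] - x[0]) ** 2))
    return np.log10(n) / (np.log10(d / L) + np.log10(n))

# Example usage (hypothetical): fractal dimension per 1-second window.
# fd_per_window = [katz_fd(audio[i:i + fs]) for i in range(0, len(audio) - fs, fs)]
```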

(C)
Vasileios Papapanagiotou, Christos Diou, Billy Langlet, Ioannis Ioakimidis and Anastasios Delopoulos
"A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements"
2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 7853-7856, IEEE, 2015 Aug
[Abstract][BibTex][pdf]

Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject's food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach.

@inproceedings{Papapanagiotou2015Parametric,
author={Vasileios Papapanagiotou and Christos Diou and Billy Langlet and Ioannis Ioakimidis and Anastasios Delopoulos},
title={A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements},
booktitle={2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={7853-7856},
publisher={IEEE},
year={2015},
month={08},
date={2015-08-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2015parametric.pdf},
doi={10.1109/EMBC.2015.7320212},
abstract={Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject's food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach.}
}
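Delta coefficients, as commonly defined in speech processing, are a short regression over neighbouring samples; a minimal sketch follows, assuming the standard textbook formula. The paper's adaptive pre-processing variant is not detailed in the abstract, so this is only an illustration of the underlying operation, and the usage comment names a hypothetical weight sequence.

```python
# Hypothetical sketch: standard delta coefficients of a sequence, as used in
# speech processing. The paper describes an *adaptive* pre-processing stage
# whose details are not in the abstract; this is only the textbook formula:
#   d[t] = sum_{n=1..N} n*(x[t+n] - x[t-n]) / (2 * sum_{n=1..N} n^2)
import numpy as np

def delta(x, N=2):
    x = np.asarray(x, dtype=float)
    padded = np.pad(x, N, mode="edge")          # replicate boundary samples
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(x)
    for t in range(len(x)):
        out[t] = sum(n * (padded[t + N + n] - padded[t + N - n])
                     for n in range(1, N + 1)) / denom
    return out

# Example usage (hypothetical): deltas of a plate-weight sequence prior to
# event detection.
# w_delta = delta(weights, N=2)
```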