Publications



2020

(J)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
"A Data Driven End-to-end Approach for In-the-wild Monitoring of Eating Behavior Using Smartwatches"
IEEE Journal of Biomedical and Health Informatics, 2020 Apr
[Abstract][BibTex][pdf]

The increased worldwide prevalence of obesity has sparked the interest of the scientific community towards tools that objectively and automatically monitor eating behavior. Despite the study of obesity being in the spotlight, such tools can also be used to study eating disorders (e.g. anorexia nervosa) or provide a personalized monitoring platform for patients or athletes. This paper presents a complete framework towards the automated i) modeling of in-meal eating behavior and ii) temporal localization of meals, from raw inertial data collected in-the-wild using commercially available smartwatches. Initially, we present an end-to-end Neural Network which detects food intake events (i.e. bites). The proposed network uses both convolutional and recurrent layers that are trained simultaneously. Subsequently, we show how the distribution of the detected bites throughout the day can be used to estimate the start and end points of meals, using signal processing algorithms. We perform extensive evaluation on each framework part individually. Leave-one-subject-out (LOSO) evaluation shows that our bite detection approach outperforms four state-of-the-art algorithms towards the detection of bites during the course of a meal (0.923 F1 score). Furthermore, LOSO and held-out set experiments regarding the estimation of meal start/end points reveal that the proposed approach outperforms a relevant approach found in the literature (Jaccard Index of 0.820 and 0.821 for the LOSO and held-out experiments, respectively). Experiments are performed using our publicly available FIC and the newly introduced FreeFIC datasets.
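
For illustration only: the abstract describes an end-to-end network that combines convolutional and recurrent layers, trained jointly, to detect bites from raw inertial windows. The Python/Keras snippet below sketches what such a model can look like; the window length, channel count, layer sizes and training setup are arbitrary placeholders, not the architecture used in the paper.

# Minimal sketch of a convolutional + recurrent bite detector (hypothetical
# layer sizes; the paper's actual architecture and training setup may differ).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW_LEN = 500   # assumed samples per window (e.g. 5 s at 100 Hz)
CHANNELS = 6       # 3-axis acceleration + 3-axis orientation velocity

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW_LEN, CHANNELS)),
    layers.Conv1D(32, kernel_size=9, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=9, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                        # recurrent layer trained jointly with the conv layers
    layers.Dense(1, activation="sigmoid"),  # probability that the window contains a bite
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data, only to show the expected tensor shapes.
x = np.random.randn(8, WINDOW_LEN, CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=(8, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)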

@article{kyritsis2020data,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={A Data Driven End-to-end Approach for In-the-wild Monitoring of Eating Behavior Using Smartwatches},
journal={IEEE Journal of Biomedical and Health Informatics},
year={2020},
month={04},
date={2020-04-03},
url={http://mug.ee.auth.gr/wp-content/uploads/kokirits2020data.pdf},
doi={https://doi.org/10.1109/JBHI.2020.2984907},
keywords={Wearable sensors;biomedical signal processing},
abstract={The increased worldwide prevalence of obesity has sparked the interest of the scientific community towards tools that objectively and automatically monitor eating behavior. Despite the study of obesity being in the spotlight, such tools can also be used to study eating disorders (e.g. anorexia nervosa) or provide a personalized monitoring platform for patients or athletes. This paper presents a complete framework towards the automated i) modeling of in-meal eating behavior and ii) temporal localization of meals, from raw inertial data collected in-the-wild using commercially available smartwatches. Initially, we present an end-to-end Neural Network which detects food intake events (i.e. bites). The proposed network uses both convolutional and recurrent layers that are trained simultaneously. Subsequently, we show how the distribution of the detected bites throughout the day can be used to estimate the start and end points of meals, using signal processing algorithms. We perform extensive evaluation on each framework part individually. Leave-one-subject-out (LOSO) evaluation shows that our bite detection approach outperforms four state-of-the-art algorithms towards the detection of bites during the course of a meal (0.923 F1 score). Furthermore, LOSO and held-out set experiments regarding the estimation of meal start/end points reveal that the proposed approach outperforms a relevant approach found in the literature (Jaccard Index of 0.820 and 0.821 for the LOSO and held-out experiments, respectively). Experiments are performed using our publicly available FIC and the newly introduced FreeFIC datasets.}
}

2020

(C)
Christos Diou, Ioannis Sarafis, Vasileios Papapanagiotou, Leonidas Alagialoglou, Irini Lekka, Dimitrios Filos, Leandros Stefanopoulos, Vasileios Kilintzis, Christos Maramis, Youla Karavidopoulou, Nikos Maglaveras, Ioannis Ioakimidis, Evangelia Charmandari, Penio Kassari, Athanasia Tragomalou, Monica Mars, Thien-An Ngoc Nguyen, Tahar Kechadi, Shane O'Donnell, Gerardine Doyle, Sarah Browne, Grace O'Malley, Rachel Heimeier, Katerina Riviou, Evangelia Koukoula, Konstantinos Filis, Maria Hassapidou, Ioannis Pagkalos, Daniel Ferri, Isabel Pérez and Anastasios Delopoulos
"BigO: A public health decision support system for measuring obesogenic behaviors of children in relation to their local environment"
42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2020 May
[Abstract][BibTex][pdf]

Obesity is a complex disease and its prevalence depends on multiple factors related to the local socioeconomic, cultural and urban context of individuals. Many obesity prevention strategies and policies, however, are horizontal measures that do not depend on context-specific evidence. In this paper we present an overview of BigO, a system designed to collect objective behavioral data from children and adolescent populations as well as their environment in order to support public health authorities in formulating effective, context-specific policies and interventions addressing childhood obesity. We present an overview of the data acquisition, indicator extraction, data exploration and analysis components of the BigO system, as well as an account of its preliminary pilot application in 33 schools and 2 clinics in four European countries, involving over 4,200 participants.

@inproceedings{diou2020bigo,
author={Christos Diou and Ioannis Sarafis and Vasileios Papapanagiotou and Leonidas Alagialoglou and Irini Lekka and Dimitrios Filos and Leandros Stefanopoulos and Vasileios Kilintzis and Christos Maramis and Youla Karavidopoulou and Nikos Maglaveras and Ioannis Ioakimidis and Evangelia Charmandari and Penio Kassari and Athanasia Tragomalou and Monica Mars and Thien-An Ngoc Nguyen and Tahar Kechadi and Shane O'Donnell and Gerardine Doyle and Sarah Browne and Grace O'Malley and Rachel Heimeier and Katerina Riviou and Evangelia Koukoula and Konstantinos Filis and Maria Hassapidou and Ioannis Pagkalos and Daniel Ferri and Isabel Pérez and Anastasios Delopoulos},
title={BigO: A public health decision support system for measuring obesogenic behaviors of children in relation to their local environment},
booktitle={42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
year={2020},
month={05},
date={2020-05-06},
url={https://arxiv.org/pdf/2005.02928.pdf},
abstract={Obesity is a complex disease and its prevalence depends on multiple factors related to the local socioeconomic, cultural and urban context of individuals. Many obesity prevention strategies and policies, however, are horizontal measures that do not depend on context-specific evidence. In this paper we present an overview of BigO, a system designed to collect objective behavioral data from children and adolescent populations as well as their environment in order to support public health authorities in formulating effective, context-specific policies and interventions addressing childhood obesity. We present an overview of the data acquisition, indicator extraction, data exploration and analysis components of the BigO system, as well as an account of its preliminary pilot application in 33 schools and 2 clinics in four European countries, involving over 4,200 participants.}
}

(C)
Vasileios Papapanagiotou, Ioannis Sarafis, Christos Diou, Ioannis Ioakimidis, Evangelia Charmandari and Anastasios Delopoulos
"Collecting big behavioral data for measuring behavior against obesity"
42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2020 May
[Abstract][BibTex][pdf]

Obesity is currently affecting very large portions of the global population. Effective prevention and treatment starts at an early age and requires objective knowledge of population-level behavior on the region/neighborhood scale. To this end, we present a system for extracting and collecting behavioral information at the individual level objectively and automatically. The behavioral information is related to physical activity, types of visited places, and transportation mode used between them. The system employs indicator-extraction algorithms from the literature which we evaluate on publicly available datasets. The system has been developed and integrated in the context of the EU-funded BigO project that aims at preventing obesity in young populations.

@inproceedings{papapanagiotou2020collecting,
author={Vasileios Papapanagiotou and Ioannis Sarafis and Christos Diou and Ioannis Ioakimidis and Evangelia Charmandari and Anastasios Delopoulos},
title={Collecting big behavioral data for measuring behavior against obesity},
booktitle={42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
year={2020},
month={05},
date={2020-05-11},
url={https://arxiv.org/pdf/2005.04928.pdf},
abstract={Obesity is currently affecting very large portions of the global population. Effective prevention and treatment starts at the early age and requires objective knowledge of population-level behavior on the region/neighborhood scale. To this end, we present a system for extracting and collecting behavioral information on the individual-level objectively and automatically. The behavioral information is related to physical activity, types of visited places, and transportation mode used between them. The system employs indicator-extraction algorithms from the literature which we evaluate on publicly available datasets. The system has been developed and integrated in the context of the EU-funded BigO project that aims at preventing obesity in young populations.}
}

(C)
Ioannis Sarafis, Christos Diou, Vasileios Papapanagiotou, Leonidas Alagialoglou and Anastasios Delopoulos
"Inferring the Spatial Distribution of Physical Activity in Children Population from Characteristics of the Environment"
42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2020 May
[Abstract][BibTex][pdf]

Obesity affects a rising percentage of the children and adolescent population, contributing to decreased quality of life and increased risk for comorbidities. Although the major causes of obesity are known, the obesogenic behaviors manifest as a result of complex interactions of the individual with the living environment. For this reason, addressing childhood obesity remains a challenging problem for public health authorities. The BigO project relies on large-scale behavioral and environmental data collection to create tools that support policy making and intervention design. In this work, we propose a novel analysis approach for modeling the expected population behavior as a function of the local environment. We experimentally evaluate this approach in predicting the expected physical activity level in small geographic regions using urban environment characteristics. Experiments on data collected from 156 children and adolescents verify the potential of the proposed approach. Specifically, we train models that predict the physical activity level in a region, achieving 81% leave-one-out accuracy. In addition, we exploit the model predictions to automatically visualize heatmaps of the expected population behavior in areas of interest, from which we draw useful insights. Overall, the predictive models and the automatic heatmaps are promising tools in gaining direct perception for the spatial distribution of the population's behavior, with potential uses by public health authorities.
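
As an illustration of the evaluation protocol described above (leave-one-out prediction of a region's physical activity level from urban environment characteristics), the sketch below shows a minimal scikit-learn setup. The features, the classifier and the data are placeholders; the paper's models and descriptors are not reproduced here.

# Leave-one-out evaluation of a region-level activity classifier (illustrative
# features and classifier; not the models used in the BigO analysis).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_regions = 40
# Hypothetical environment descriptors per region, e.g. park area, road density,
# number of sport facilities, public transport stops.
X = rng.random((n_regions, 4))
y = rng.integers(0, 2, n_regions)  # low (0) vs high (1) physical activity level

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())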

@conference{sarafis2020inferring,
author={Ioannis Sarafis and Christos Diou and Vasileios Papapanagiotou and Leonidas Alagialoglou and Anastasios Delopoulos},
title={Inferring the Spatial Distribution of Physical Activity in Children Population from Characteristics of the Environment},
booktitle={42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
year={2020},
month={05},
date={2020-05-08},
url={https://arxiv.org/pdf/2005.03957.pdf},
abstract={Obesity affects a rising percentage of the children and adolescent population, contributing to decreased quality of life and increased risk for comorbidities. Although the major causes of obesity are known, the obesogenic behaviors manifest as a result of complex interactions of the individual with the living environment. For this reason, addressing childhood obesity remains a challenging problem for public health authorities. The BigO project relies on large-scale behavioral and environmental data collection to create tools that support policy making and intervention design. In this work, we propose a novel analysis approach for modeling the expected population behavior as a function of the local environment. We experimentally evaluate this approach in predicting the expected physical activity level in small geographic regions using urban environment characteristics. Experiments on data collected from 156 children and adolescents verify the potential of the proposed approach. Specifically, we train models that predict the physical activity level in a region, achieving 81% leave-one-out accuracy. In addition, we exploit the model predictions to automatically visualize heatmaps of the expected population behavior in areas of interest, from which we draw useful insights. Overall, the predictive models and the automatic heatmaps are promising tools in gaining direct perception for the spatial distribution of the population\'s behavior, with potential uses by public health authorities.}
}

2019

(J)
Christos Diou, Ioannis Sarafis, Vasileios Papapanagiotou, Ioannis Ioakimidis and Anastasios Delopoulos
"A methodology for obtaining objective measurements of population obesogenic behaviors in relation to the environment"
Statistical Journal of the IAOS, 35, (4), pp. 677-690, 2019 Dec
[Abstract][BibTex][pdf]

The way we eat and what we eat, the way we move and the way we sleep significantly impact the risk of becoming obese. These aspects of behavior decompose into several personal behavioral elements including our food choices, eating place preferences, transportation choices, sleeping periods and duration etc. Most of these elements are highly correlated in a causal way with the conditions of our local urban, social, regulatory and economic environment. To this end, the H2020 project “BigO: Big Data Against Childhood Obesity” (http://bigoprogram.eu) aims to create new sources of evidence together with exploration tools, assisting the Public Health Authorities in their effort to tackle childhood obesity. In this paper, we present the technology-based methodology that has been developed in the context of BigO in order to: (a) objectively monitor a matrix of a population’s obesogenic behavioral elements using commonly available wearable sensors (accelerometers, gyroscopes, GPS), embedded in smart phones and smart watches; (b) acquire information for the environment from open and online data sources; (c) provide aggregation mechanisms to correlate the population behaviors with the environmental characteristics; (d) ensure the privacy protection of the participating individuals; and (e) quantify the quality of the collected big data.

@article{DiouIAOS2019,
author={Christos Diou and Ioannis Sarafis and Vasileios Papapanagiotou and Ioannis Ioakimidis and Anastasios Delopoulos},
title={A methodology for obtaining objective measurements of population obesogenic behaviors in relation to the environment},
journal={Statistical Journal of the IAOS},
volume={35},
number={4},
pages={677-690},
year={2019},
month={12},
date={2019-12-10},
url={https://arxiv.org/pdf/1911.08315.pdf},
doi={https://doi.org/10.3233/SJI-190537},
abstract={The way we eat and what we eat, the way we move and the way we sleep significantly impact the risk of becoming obese. These aspects of behavior decompose into several personal behavioral elements including our food choices, eating place preferences, transportation choices, sleeping periods and duration etc. Most of these elements are highly correlated in a causal way with the conditions of our local urban, social, regulatory and economic environment. To this end, the H2020 project “BigO: Big Data Against Childhood Obesity” (http://bigoprogram.eu) aims to create new sources of evidence together with exploration tools, assisting the Public Health Authorities in their effort to tackle childhood obesity. In this paper, we present the technology-based methodology that has been developed in the context of BigO in order to: (a) objectively monitor a matrix of a population’s obesogenic behavioral elements using commonly available wearable sensors (accelerometers, gyroscopes, GPS), embedded in smart phones and smart watches; (b) acquire information for the environment from open and online data sources; (c) provide aggregation mechanisms to correlate the population behaviors with the environmental characteristics; (d) ensure the privacy protection of the participating individuals; and (e) quantify the quality of the collected big data.}
}

(J)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
"Modeling Wrist Micromovements to Measure In-Meal Eating Behavior from Inertial Sensor Data"
IEEE Journal of Biomedical and Health Informatics (JBHI), 2019 Jan
[Abstract][BibTex][pdf]

Overweight and obesity are both associated with in-meal eating parameters such as eating speed. Recently, the plethora of available wearable devices in the market ignited the interest of both the scientific community and the industry towards unobtrusive solutions for eating behavior monitoring. In this paper we present an algorithm for automatically detecting the in-meal food intake cycles using the inertial signals (acceleration and orientation velocity) from an off-the-shelf smartwatch. We use 5 specific wrist micromovements to model the series of actions leading to and following an intake event (i.e. bite). Food intake detection is performed in two steps. In the first step we process windows of raw sensor streams and estimate their micromovement probability distributions by means of a Convolutional Neural Network (CNN). In the second step we use a Long-Short Term Memory (LSTM) network to capture the temporal evolution and classify sequences of windows as food intake cycles. Evaluation is performed using a challenging dataset of 21 meals from 12 subjects. In our experiments we compare the performance of our algorithm against three state-of-the-art approaches, where our approach achieves the highest F1 detection score (0.913 in the Leave-One-Subject-Out experiment). The dataset used in the experiments is available at https://mug.ee.auth.gr/intake-cycle-detection/.
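
A rough sketch of the two-step structure described above (a CNN producing a micromovement probability distribution per window, followed by an LSTM over sequences of those distributions) is given below. The window length, channel count, layer sizes and number of windows per sequence are assumptions made for illustration; only the five-class output mirrors the paper's description.

# Sketch of the two-step pipeline: per-window micromovement distributions (CNN)
# followed by sequence classification (LSTM). All sizes are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WIN, CH, CLASSES, SEQ = 200, 6, 5, 30  # assumed window length, channels, classes, windows per sequence

# Step 1: CNN mapping a raw inertial window to a 5-class micromovement distribution.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(WIN, CH)),
    layers.Conv1D(32, 7, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(CLASSES, activation="softmax"),
])

# Step 2: LSTM classifying a sequence of micromovement distributions as a food intake cycle.
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ, CLASSES)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])

windows = np.random.randn(SEQ, WIN, CH).astype("float32")
probs = cnn.predict(windows, verbose=0)               # (SEQ, 5) micromovement distributions
p_intake = lstm.predict(probs[None, ...], verbose=0)  # probability of a food intake cycle
print(p_intake.shape)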

@article{kyritsis2019modeling,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={Modeling Wrist Micromovements to Measure In-Meal Eating Behavior from Inertial Sensor Data},
journal={IEEE Journal of Biomedical and Health Informatics (JBHI)},
year={2019},
month={01},
date={2019-01-09},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2019modeling.pdf},
doi={https://doi.org/10.1109/JBHI.2019.2892011},
abstract={Overweight and obesity are both associated with in-meal eating parameters such as eating speed. Recently, the plethora of available wearable devices in the market ignited the interest of both the scientific community and the industry towards unobtrusive solutions for eating behavior monitoring. In this paper we present an algorithm for automatically detecting the in-meal food intake cycles using the inertial signals (acceleration and orientation velocity) from an off-the-shelf smartwatch. We use 5 specific wrist micromovements to model the series of actions leading to and following an intake event (i.e. bite). Food intake detection is performed in two steps. In the first step we process windows of raw sensor streams and estimate their micromovement probability distributions by means of a Convolutional Neural Network (CNN). In the second step we use a Long-Short Term Memory (LSTM) network to capture the temporal evolution and classify sequences of windows as food intake cycles. Evaluation is performed using a challenging dataset of 21 meals from 12 subjects. In our experiments we compare the performance of our algorithm against three state-of-the-art approaches, where our approach achieves the highest F1 detection score (0.913 in the Leave-One-Subject-Out experiment). The dataset used in the experiments is available at https://mug.ee.auth.gr/intake-cycle-detection/.}
}

(J)
Billy Langlet, Petter Fagerberg, Anastasios Delopoulos, Vasileios Papapanagiotou, Christos Diou, Christos Maramis, Nikolaos Maglaveras, Anna Anvret and Ioannis Ioakimidis
"Predicting Real-Life Eating Behaviours Using Single School Lunches in Adolescents"
Nutrients, 11, (3), pp. 672, 2019 Mar
[Abstract][BibTex][pdf]

Large portion sizes and a high eating rate are associated with high energy intake and obesity. Most individuals maintain their food intake weight (g) and eating rate (g/min) rank in relation to their peers, despite food and environmental manipulations. Single meal measures may enable identification of “large portion eaters” and “fast eaters,” finding individuals at risk of developing obesity. The aim of this study was to predict real-life food intake weight and eating rate based on one school lunch. Twenty-four high-school students with a mean (±SD) age of 16.8 yr (±0.7) and body mass index of 21.9 (±4.1) were recruited, using no exclusion criteria. Food intake weight and eating rate was first self-rated (“Less,” “Average” or “More than peers”), then objectively recorded during one school lunch (absolute weight of consumed food in grams). Afterwards, subjects recorded as many main meals (breakfasts, lunches and dinners) as possible in real-life for a period of at least two weeks, using a Bluetooth connected weight scale and a smartphone application. On average participants recorded 18.9 (7.3) meals during the study. Real-life food intake weight was 327.4 g (±110.6), which was significantly lower (p = 0.027) than the single school lunch, at 367.4 g (±167.2). When the intra-class correlation of food weight intake between the objectively recorded real-life and school lunch meals was compared, the correlation was excellent (R = 0.91). Real-life eating rate was 33.5 g/min (±14.8), which was significantly higher (p = 0.010) than the single school lunch, at 27.7 g/min (±13.3). The intra-class correlation of the recorded eating rate between real-life and school lunch meals was very large (R = 0.74). The participants’ recorded food intake weights and eating rates were divided into terciles and compared between school lunches and real-life, with moderate or higher agreement (κ = 0.75 and κ = 0.54, respectively). In contrast, almost no agreement was observed between self-rated and real-life recorded rankings of food intake weight and eating rate (κ = 0.09 and κ = 0.08, respectively). The current study provides evidence that both food intake weight and eating rates per meal vary considerably in real-life per individual. However, based on these behaviours, most students can be correctly classified in regard to their peers based on single school lunches. In contrast, self-reported food intake weight and eating rate are poor predictors of real-life measures. Finally, based on the recorded individual variability of real-life food intake weight and eating rate, it is not advised to rank individuals based on single recordings collected in real-life settings.
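
The tercile agreement values above appear to be Cohen's kappa coefficients; the toy example below only shows how such a value is computed from paired tercile rankings, using fabricated data rather than the study's measurements.

# Toy example of tercile agreement via Cohen's kappa (fabricated rankings,
# only to illustrate the agreement statistic used in the study).
from sklearn.metrics import cohen_kappa_score

# Tercile of food intake weight per participant ("low"/"mid"/"high"),
# once from the single school lunch and once from real-life recordings.
school_lunch = ["low", "mid", "high", "high", "mid", "low", "mid", "high"]
real_life    = ["low", "mid", "high", "mid",  "mid", "low", "mid", "high"]

print(cohen_kappa_score(school_lunch, real_life))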

@article{Langlet2019Predicting,
author={Billy Langlet and Petter Fagerberg and Anastasios Delopoulos and Vasileios Papapanagiotou and Christos Diou and Christos Maramis and Nikolaos Maglaveras and Anna Anvret and Ioannis Ioakimidis},
title={Predicting Real-Life Eating Behaviours Using Single School Lunches in Adolescents},
journal={Nutrients},
volume={11},
number={3},
pages={672},
year={2019},
month={03},
date={2019-03-20},
url={https://www.mdpi.com/2072-6643/11/3/672/pdf},
doi={https://doi.org/10.3390/nu11030672},
abstract={Large portion sizes and a high eating rate are associated with high energy intake and obesity. Most individuals maintain their food intake weight (g) and eating rate (g/min) rank in relation to their peers, despite food and environmental manipulations. Single meal measures may enable identification of “large portion eaters” and “fast eaters,” finding individuals at risk of developing obesity. The aim of this study was to predict real-life food intake weight and eating rate based on one school lunch. Twenty-four high-school students with a mean (±SD) age of 16.8 yr (±0.7) and body mass index of 21.9 (±4.1) were recruited, using no exclusion criteria. Food intake weight and eating rate was first self-rated (“Less,” “Average” or “More than peers”), then objectively recorded during one school lunch (absolute weight of consumed food in grams). Afterwards, subjects recorded as many main meals (breakfasts, lunches and dinners) as possible in real-life for a period of at least two weeks, using a Bluetooth connected weight scale and a smartphone application. On average participants recorded 18.9 (7.3) meals during the study. Real-life food intake weight was 327.4 g (±110.6), which was significantly lower (p = 0.027) than the single school lunch, at 367.4 g (±167.2). When the intra-class correlation of food weight intake between the objectively recorded real-life and school lunch meals was compared, the correlation was excellent (R = 0.91). Real-life eating rate was 33.5 g/min (±14.8), which was significantly higher (p = 0.010) than the single school lunch, at 27.7 g/min (±13.3). The intra-class correlation of the recorded eating rate between real-life and school lunch meals was very large (R = 0.74). The participants’ recorded food intake weights and eating rates were divided into terciles and compared between school lunches and real-life, with moderate or higher agreement (κ = 0.75 and κ = 0.54, respectively). In contrast, almost no agreement was observed between self-rated and real-life recorded rankings of food intake weight and eating rate (κ = 0.09 and κ = 0.08, respectively). The current study provides evidence that both food intake weight and eating rates per meal vary considerably in real-life per individual. However, based on these behaviours, most students can be correctly classified in regard to their peers based on single school lunches. In contrast, self-reported food intake weight and eating rate are poor predictors of real-life measures. Finally, based on the recorded individual variability of real-life food intake weight and eating rate, it is not advised to rank individuals based on single recordings collected in real-life settings.}
}

2019

(C)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Berlin, Germany, 2019 Jul
[Abstract][BibTex][pdf]

Automated and objective monitoring of eating behavior has received the attention of both the research community and the industry over the past few years. In this paper we present a method for automatically detecting meals in free living conditions, using the inertial data (acceleration and orientation velocity) from commercially available smartwatches. The proposed method operates in two steps. In the first step we process the raw inertial signals using an End-to-End Neural Network with the purpose of detecting the bite events throughout the recording. During the next step, we process the resulting bite detections using signal processing algorithms to obtain the final meal start and end timestamp estimates. Evaluation results obtained from our Leave One Subject Out experiments using our publicly available FIC and FreeFIC datasets, exhibit encouraging results by achieving an F1/Average Jaccard Index of 0.894/0.804.
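
The second step, turning detected bites into meal start/end estimates, is only described at a high level above. One simple realisation, smoothing a per-second bite indicator and keeping contiguous above-threshold segments, is sketched below; the window length and threshold are arbitrary placeholders, not the paper's tuned values or exact algorithm.

# Illustrative meal localisation from detected bites: build a per-second bite
# indicator, smooth it, threshold it and keep contiguous segments.
import numpy as np

def meals_from_bites(bite_times_s, day_len_s, win_s=600, thresh=0.005):
    """Return (start, end) pairs in seconds estimated from bite timestamps (end exclusive)."""
    density = np.zeros(day_len_s)
    density[np.asarray(bite_times_s, dtype=int)] = 1.0
    smoothed = np.convolve(density, np.ones(win_s) / win_s, mode="same")  # local bite rate
    active = np.concatenate(([False], smoothed > thresh, [False]))
    changes = np.flatnonzero(np.diff(active.astype(int)))
    return list(zip(changes[0::2], changes[1::2]))

# Example: a cluster of bites around 13:00 on a simulated day.
bites = [46800 + 40 * k for k in range(30)]
print(meals_from_bites(bites, day_len_s=86400))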

@conference{kyritsis2019detecting,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={Detecting Meals In the Wild Using the Inertial Data of a Typical Smartwatch},
booktitle={41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
address={Berlin, Germany},
year={2019},
month={07},
date={2019-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2019detecting.pdf},
abstract={Automated and objective monitoring of eating behavior has received the attention of both the research community and the industry over the past few years. In this paper we present a method for automatically detecting meals in free living conditions, using the inertial data (acceleration and orientation velocity) from commercially available smartwatches. The proposed method operates in two steps. In the first step we process the raw inertial signals using an End-to-End Neural Network with the purpose of detecting the bite events throughout the recording. During the next step, we process the resulting bite detections using signal processing algorithms to obtain the final meal start and end timestamp estimates. Evaluation results obtained from our Leave One Subject Out experiments using our publicly available FIC and FreeFIC datasets, exhibit encouraging results by achieving an F1/Average Jaccard Index of 0.894/0.804.}
}

(C)
Ioannis Sarafis, Christos Diou, Ioannis Ioakimidis and Anastasios Delopoulos
41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019 Jul
[Abstract][BibTex][pdf]

Certain patterns of eating behaviour during meals have been identified as risk factors for long-term abnormal eating development in healthy individuals and, eventually, can affect the body weight. To detect early signs of problematic eating behaviour, this paper proposes a novel method for building behaviour assessment models. The goal of the models is to predict whether the in-meal eating behaviour resembles patterns associated with obesity, eating disorders, or low-risk behaviours. The models are trained using meals recorded with a plate scale from a reference population and labels annotated by a domain expert. In addition, the domain expert assigned scores that characterise the degree of any exhibited abnormal patterns. To improve model effectiveness, we use the domain expert’s scores to create training error regularisation weights that alter the importance of each training instance for its class during model training. The behaviour assessment models are based on the SVM algorithm and the fuzzy SVM algorithm for their instance-weighted variation. Experiments conducted on meals recorded from 120 individuals show that: (a) the proposed approach can produce effective models for eating behaviour classification (for individuals), or for ranking (for populations); and (b) the instance-weighted fuzzy SVM models achieve significant performance improvements, compared to the non-weighted, standard SVM models.
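
For illustration, instance-weighted (fuzzy) SVM training of the kind described above can be emulated by deriving per-meal weights from the expert scores and passing them as sample weights to an SVM fit, as in the scikit-learn sketch below. The features, labels, scores and the score-to-weight mapping are all placeholders, not the paper's models or data.

# Sketch of instance-weighted SVM training, with weights derived from
# (hypothetical) expert scores of abnormal-pattern severity.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 10))          # in-meal behaviour features (toy)
y = rng.integers(0, 2, 120)                 # 0: low-risk, 1: obesity/eating-disorder-like pattern
expert_score = rng.uniform(0.0, 1.0, 120)   # degree of exhibited abnormal pattern

# Turn the expert scores into per-instance training weights (one plausible mapping).
weights = 0.5 + expert_score

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=weights)        # weighted-SVM training
print(clf.score(X, y))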

@conference{sarafis2019assessment,
author={Ioannis Sarafis and Christos Diou and Ioannis Ioakimidis and Anastasios Delopoulos},
title={Assessment of In-Meal Eating Behaviour using Fuzzy SVM},
booktitle={41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
year={2019},
month={07},
date={2019-07-27},
url={https://mug.ee.auth.gr/wp-content/uploads/sarafis2019assessment.pdf},
doi={https://doi.org/10.1109/EMBC.2019.8857606},
abstract={Certain patterns of eating behaviour during meal have been identified as risk factors for long-term abnormal eating development in healthy individuals and, eventually, can affect the body weight. To detect early signs of problematic eating behaviour, this paper proposes a novel method for building behaviour assessment models. The goal of the models is to predict whether the in-meal eating behaviour resembles patterns associated with obesity, eating disorders, or low-risk behaviours. The models are trained using meals recorded with a plate scale from a reference population and labels annotated by a domain expert. In addition, the domain expert assigned scores that characterise the degree of any exhibited abnormal patterns. To improve model effectiveness, we use the domain expert’s scores to create training error regularisation weights that alter the importance of each training instance for its class during model training. The behaviour assessment models are based on the SVM algorithm and the fuzzy SVM algorithm for their instance-weighted variation. Experiments conducted on meals recorded from 120 individuals show that: (a) the proposed approach can produce effective models for eating behaviour classification (for individuals), or for ranking (for populations); and (b) the instance-weighted fuzzy SVM models achieve significant performance improvements, compared to the non-weighted, standard SVM models.}
}

(C)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019 Jul
[Abstract][BibTex][pdf]

Obesity is a preventable disease that affects the health of a significant population percentage, reduces the life expectancy and encumbers the health care systems. The obesity epidemic is not caused by isolated factors, but it is the result of multiple behavioural patterns and complex interactions with the living environment. Therefore, in-depth understanding of the population behaviour is essential in order to create successful policies against obesity prevalence. To this end, the BigO system facilitates the collection, processing and modelling of behavioural data at population level to provide evidence for effective policy and interventions design. In this paper, we introduce the behaviour profiles mechanism of BigO that produces comprehensive models for the behavioural patterns of individuals, while maintaining high levels of privacy protection. We give examples for the proposed mechanism from real world data and we discuss usages for supporting various types of evidence-based policy design.

@conference{sarafis2019behaviour,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Behaviour Profiles for Evidence-based Policies Against Obesity},
booktitle={41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
year={2019},
month={07},
date={2019-07-26},
url={https://mug.ee.auth.gr/wp-content/uploads/sarafis2019behaviour.pdf},
doi={https://doi.org/10.1109/EMBC.2019.8857161},
abstract={Obesity is a preventable disease that affects the health of a significant population percentage, reduces the life expectancy and encumbers the health care systems. The obesity epidemic is not caused by isolated factors, but it is the result of multiple behavioural patterns and complex interactions with the living environment. Therefore, in-depth understanding of the population behaviour is essential in order to create successful policies against obesity prevalence. To this end, the BigO system facilitates the collection, processing and modelling of behavioural data at population level to provide evidence for effective policy and interventions design. In this paper, we introduce the behaviour profiles mechanism of BigO that produces comprehensive models for the behavioural patterns of individuals, while maintaining high levels of privacy protection. We give examples for the proposed mechanism from real world data and we discuss usages for supporting various types of evidence-based policy design.}
}

2018

(J)
Janet van den Boer, Annemiek van der Lee, Lingchuan Zhou, Vasileios Papapanagiotou, Christos Diou, Anastasios Delopoulos and Monica Mars
"The SPLENDID Eating Detection Sensor: Development and Feasibility Study"
JMIR mHealth and uHealth, 6, (9), pp. 170, 2018 Sep
[Abstract][BibTex]

The available methods for monitoring food intake---which for a great part rely on self-report---often provide biased and incomplete data. Currently, no good technological solutions are available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before. To date, literature is lacking a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful wearer comfort and discreetness of the device need to be improved.

@article{2018Boer,
author={Janet van den Boer and Annemiek van der Lee and Lingchuan Zhou and Vasileios Papapanagiotou and Christos Diou and Anastasios Delopoulos and Monica Mars},
title={The SPLENDID Eating Detection Sensor: Development and Feasibility Study},
journal={JMIR mHealth and uHealth},
volume={6},
number={9},
pages={170},
year={2018},
month={09},
date={2018-09-04},
doi={https://doi.org/10.2196/mhealth.9781},
issn={2291-5222},
abstract={The available methods for monitoring food intake---which for a great part rely on self-report---often provide biased and incomplete data. Currently, no good technological solutions are available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before. To date, literature is lacking a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful wearer comfort and discreetness of the device need to be improved.}
}

(J)
Christos Diou, Pantelis Lelekas and Anastasios Delopoulos
"Image-Based Surrogates of Socio-Economic Status in Urban Neighborhoods Using Deep Multiple Instance Learning"
Journal of Imaging, 4, (11), pp. 125, 2018 Oct
[Abstract][BibTex]

Background: Evidence-based policymaking requires data about the local population’s socioeconomic status (SES) at detailed geographical level, however, such information is often not available, or is too expensive to acquire. Researchers have proposed solutions to estimate SES indicators by analyzing Google Street View images, however, these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available by statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable at 30 municipalities of varying SES in Greece. Our model has R2=0.76 and a correlation coefficient of 0.874 with the true unemployment rate, while it achieves a mean absolute percentage error of 0.089 and mean absolute error of 1.87 on a held-out test set. Similar results are also obtained for other socioeconomic indicators, related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to estimate SES indicators at the local level automatically, using images of parked cars detected via Google Street View, without the need for any manual labeling effort
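
The final estimation step described above, regressing an SES indicator on an image-derived surrogate variable, reduces to an ordinary linear regression; a brief sketch follows. The surrogate (here a hypothetical "share of premium cars" per municipality) and the unemployment rates are fabricated, and the image-analysis pipeline that produces the surrogate is not reproduced.

# Final step of the pipeline only: regress an SES indicator on an image-derived
# surrogate variable. All values below are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
surrogate = rng.uniform(0.0, 1.0, 30).reshape(-1, 1)               # one surrogate value per municipality
unemployment = 25 - 15 * surrogate.ravel() + rng.normal(0, 2, 30)  # toy target indicator (%)

reg = LinearRegression().fit(surrogate, unemployment)
print("R^2 on the toy data:", reg.score(surrogate, unemployment))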

@article{Diou2018JI,
author={Christos Diou and Pantelis Lelekas and Anastasios Delopoulos},
title={Image-Based Surrogates of Socio-Economic Status in Urban Neighborhoods Using Deep Multiple Instance Learning},
journal={Journal of Imaging},
volume={4},
number={11},
pages={125},
year={2018},
month={10},
date={2018-10-23},
doi={https://doi.org/10.3390/jimaging4110125},
issn={2313-433X},
abstract={Background: Evidence-based policymaking requires data about the local population’s socioeconomic status (SES) at detailed geographical level, however, such information is often not available, or is too expensive to acquire. Researchers have proposed solutions to estimate SES indicators by analyzing Google Street View images, however, these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available by statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable at 30 municipalities of varying SES in Greece. Our model has R2=0.76 and a correlation coefficient of 0.874 with the true unemployment rate, while it achieves a mean absolute percentage error of 0.089 and mean absolute error of 1.87 on a held-out test set. Similar results are also obtained for other socioeconomic indicators, related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to estimate SES indicators at the local level automatically, using images of parked cars detected via Google Street View, without the need for any manual labeling effort}
}

(J)
Maryam Esfandiari, Vasilis Papapanagiotou, Christos Diou, Modjtaba Zandian, Jenny Nolstam, Per Södersten and Cecilia Bergh
"Control of Eating Behavior Using a Novel Feedback System"
JoVE, (135), 2018 May
[Abstract][BibTex]

Subjects eat food from a plate that sits on a scale connected to a computer that records the weight loss of the plate during the meal and makes up a curve of food intake, meal duration and rate of eating modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders and has reduced the weight and improved the health of severely overweight patients.
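
A rough sketch of the quadratic modelling mentioned above (fitting the cumulative intake curve in time and reading off total intake and eating rate) is given below. The plate-weight series is simulated rather than Mandometer data, and the fit is plain least squares, not the custom algorithm referred to in the abstract.

# Fit the cumulative food intake curve with a quadratic in time and read off
# total intake and eating rate. The measurements below are simulated.
import numpy as np

t = np.arange(0, 720, 5.0)                       # seconds into the meal, 5 s sampling
plate_weight = 350 - (0.5 * t - 0.0002 * t**2)   # grams left on the plate (toy curve)
intake = plate_weight[0] - plate_weight          # cumulative food intake curve

c2, c1, c0 = np.polyfit(t, intake, deg=2)        # intake(t) ~ c2*t^2 + c1*t + c0
print("total intake (g):", intake[-1])
print("initial eating rate (g/s):", c1)          # derivative of the fit at t = 0
print("deceleration (g/s^2):", 2 * c2)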

@article{Esfandiari2018,
author={Maryam Esfandiari and Vasilis Papapanagiotou and Christos Diou and Modjtaba Zandian and Jenny Nolstam and Per Södersten and Cecilia Bergh},
title={Control of Eating Behavior Using a Novel Feedback System},
journal={JoVE},
number={135},
year={2018},
month={05},
date={2018-05-08},
doi={https://doi.org/10.3791/57432},
abstract={Subjects eat food from a plate that sits on a scale connected to a computer that records the weight loss of the plate during the meal and makes up a curve of food intake, meal duration and rate of eating modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders and has reduced the weight and improved the health of severely overweight patients.}
}

(J)
George Mamalakis, Christos Diou, Andreas Symeonidis and Leonidas Georgiadis
"Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis"
Neural Computing and Applications, 2018 Jul
[Abstract][BibTex]

In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon's file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI^2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.
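
The core of such a filter, a k-nearest-neighbour decision over outlier sequences using a normalised longest common subsequence similarity, can be outlined as below. The normalisation, the value of k and the labelling scheme are plausible choices made for illustration, not necessarily the paper's exact definitions.

# Sketch of a k-NN filter over outlier sequences using normalised LCS
# similarity (one plausible normalisation; parameters are illustrative).
from collections import Counter

def lcs_len(a, b):
    """Length of the longest common subsequence of two symbol sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def nlcs(a, b):
    """Normalised LCS similarity in [0, 1]."""
    return lcs_len(a, b) / max(len(a), len(b)) if a and b else 0.0

def knn_is_true_alert(candidate, labelled_seqs, k=3):
    """labelled_seqs: list of (outlier sequence, is_true_positive) pairs."""
    neighbours = sorted(labelled_seqs, key=lambda s: nlcs(candidate, s[0]), reverse=True)[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Toy outlier sequences (file-system event symbols) with known labels.
history = [("ABCA", True), ("ABDA", True), ("XYZQ", False), ("QZZY", False)]
print(knn_is_true_alert("ABCC", history))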

@article{Mamalakis2018,
author={George Mamalakis and Christos Diou and Andreas Symeonidis and Leonidas Georgiadis},
title={Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis},
journal={Neural Computing and Applications},
year={2018},
month={07},
date={2018-07-05},
doi={https://doi.org/10.1007/s00521-018-3550-x},
issn={1433-3058},
abstract={In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon's file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI^2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.}
}

(J)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Span error bound for weighted SVM with applications in hyperparameter selection (preprint)"
CoRR, abs/1809.06124, 2018 Sep
[Abstract][BibTex][pdf]

Weighted SVM (or fuzzy SVM) is the most widely used SVM variant, owing its effectiveness to the use of instance weights. Proper selection of the instance weights can lead to increased generalization performance. In this work, we extend the span error bound theory to weighted SVM and we introduce effective hyperparameter selection methods for the weighted SVM algorithm. The significance of the presented work is that it enables the application of span bound and span-rule with weighted SVM. The span bound is an upper bound of the leave-one-out error that can be calculated using a single trained SVM model. This is important since leave-one-out error is an almost unbiased estimator of the test error. Similarly, the span-rule gives the actual value of the leave-one-out error. Thus, one can apply span bound and span-rule as computationally lightweight alternatives of the leave-one-out procedure for hyperparameter selection. The main theoretical contributions are: (a) we prove the necessary and sufficient condition for the existence of the span of a support vector in weighted SVM; and (b) we prove the extension of span bound and span-rule to weighted SVM. We experimentally evaluate the span bound and the span-rule for hyperparameter selection and we compare them with other methods that are applicable to weighted SVM: the K-fold cross-validation and the $\xi - \alpha$ bound. Experiments on 14 benchmark data sets and data sets with importance scores for the training instances show that: (a) the condition for the existence of span in weighted SVM is satisfied almost always; (b) the span-rule is the most effective method for weighted SVM hyperparameter selection; (c) the span-rule is the best predictor of the test error in the mean square error sense; and (d) the span-rule is efficient and, for certain problems, it can be calculated faster than K-fold cross-validation.
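
For context, the K-fold cross-validation baseline that the span-based methods are compared against, applied to a weighted SVM, amounts to splitting the instance weights alongside the data in every fold, as in the sketch below. The data, grid and weights are toy values; the span bound and span-rule themselves are not reproduced here.

# K-fold hyperparameter selection for a weighted SVM: instance weights are
# split alongside the data. Toy data and grid; shown only as the baseline
# the span-based methods are compared against.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = rng.integers(0, 2, 200)
w = rng.uniform(0.2, 1.0, 200)   # importance score per training instance

best = None
for C in (0.1, 1.0, 10.0):
    accs = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        clf = SVC(C=C, kernel="rbf")
        clf.fit(X[train], y[train], sample_weight=w[train])  # weights follow the training fold
        accs.append(clf.score(X[test], y[test]))
    score = np.mean(accs)
    if best is None or score > best[1]:
        best = (C, score)
print("selected C:", best[0], "cv accuracy:", best[1])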

@article{Sarafis2018CoRR,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Span error bound for weighted SVM with applications in hyperparameter selection (preprint)},
journal={CoRR},
volume={abs/1809.06124},
year={2018},
month={09},
date={2018-09-17},
url={https://arxiv.org/pdf/1809.06124.pdf},
abstract={Weighted SVM (or fuzzy SVM) is the most widely used SVM variant owning its effectiveness to the use of instance weights. Proper selection of the instance weights can lead to increased generalization performance. In this work, we extend the span error bound theory to weighted SVM and we introduce effective hyperparameter selection methods for the weighted SVM algorithm. The significance of the presented work is that enables the application of span bound and span-rule with weighted SVM. The span bound is an upper bound of the leave-one-out error that can be calculated using a single trained SVM model. This is important since leave-one-out error is an almost unbiased estimator of the test error. Similarly, the span-rule gives the actual value of the leave-one-out error. Thus, one can apply span bound and span-rule as computationally lightweight alternatives of leave-one-out procedure for hyperparameter selection. The main theoretical contributions are: (a) we prove the necessary and sufficient condition for the existence of the span of a support vector in weighted SVM; and (b) we prove the extension of span bound and span-rule to weighted SVM. We experimentally evaluate the span bound and the span-rule for hyperparameter selection and we compare them with other methods that are applicable to weighted SVM: the K-fold cross-validation and the $\\xi - \\alpha$ bound. Experiments on 14 benchmark data sets and data sets with importance scores for the training instances show that: (a) the condition for the existence of span in weighted SVM is satisfied almost always; (b) the span-rule is the most effective method for weighted SVM hyperparameter selection; (c) the span-rule is the best predictor of the test error in the mean square error sense; and (d) the span-rule is efficient and, for certain problems, it can be calculated faster than K-fold cross-validation.}
}

(J)
Vasilis Papapanagiotou, Christos Diou, Ioannis Ioakimidis, Per Sodersten and Anastasios Delopoulos
IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2018 Mar
[Abstract][BibTex][pdf]

The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is next used to model a meal as a sequence of such events. The selection of the most likely parse tree is finally used to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators, and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.
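To make the symbol-string idea concrete, here is a toy sketch that maps successive plate-weight differences to event symbols; the thresholds are hypothetical and the grammar-based parsing step described in the paper is omitted.

# Toy symbolization of a plate-weight series into event symbols; thresholds are
# hypothetical and the context-free grammar parsing step is not shown.
def symbolize(weights_g, bite_thresh=-5.0, addition_thresh=20.0):
    """Map successive weight differences (grams) to 'b' (bite), 'a' (food
    addition) or 'n' (no event / noise)."""
    symbols = []
    for prev, curr in zip(weights_g[:-1], weights_g[1:]):
        delta = curr - prev
        if delta <= bite_thresh:
            symbols.append("b")   # weight drop large enough to be a bite
        elif delta >= addition_thresh:
            symbols.append("a")   # weight increase: food added to the plate
        else:
            symbols.append("n")   # small fluctuation treated as noise
    return "".join(symbols)

print(symbolize([350, 348, 341, 340, 362, 355]))  # -> "nbnab"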

@article{Vassilis2018,
author={Vasilis Papapanagiotou and Christos Diou and Ioannis Ioakimidis and Per Sodersten and Anastasios Delopoulos},
title={Automatic analysis of food intake and meal microstructure based on continuous weight measurements},
journal={IEEE Journal of Biomedical and Health Informatics},
volume={PP},
number={99},
pages={1-1},
year={2018},
month={03},
date={2018-03-05},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2018automated.pdf},
doi={http://10.1109/JBHI.2018.2812243},
abstract={The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is next used to model a meal as a sequence of such events. The selection of the most likely parse tree is finally used to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators, and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.}
}

2018

(M)
Christos Diou, Ioannis Ioakeimidis, Evangelia Charmandari, Penio Kassaric, Irini Lekka, Monica Mars, Cecilia Bergh, Tahar Kechadi, Gerardine Doyle, Grace O’Malley, Rachel Heimeier, Anna Karin Lindroos, Sofoklis Sotiriou, Evangelia Koukoula, Sergio Guillén, George Lymperopoulos, Nicos Maglaveras and Anastasios Delopoulos
57th Annual ESPE, Athens, Greece, 2018 Sep
[Abstract][BibTex]

Background: Childhood obesity is a major global and European public health problem. The need for community-targeted actions has long been recognized; however, such actions have been hindered by the lack of a monitoring and evaluation framework and by the methodological inability to objectively quantify local community characteristics within a reasonable timeframe. Recent technological achievements in mobile and wearable electronics and Big Data infrastructures allow the engagement of European citizens in the data collection process.

@misc{Diou2018ESPE,
author={Christos Diou and Ioannis Ioakeimidis and Evangelia Charmandari and Penio Kassaric and Irini Lekka and Monica Mars and Cecilia Bergh and Tahar Kechadi and Gerardine Doyle and Grace O’Malley and Rachel Heimeier and Anna Karin Lindroos and Sofoklis Sotiriou and Evangelia Koukoula and Sergio Guillén and George Lymperopoulos and Nicos Maglaveras and Anastasios Delopoulos},
title={BigO: Big Data Against Childhood Obesity},
howpublished={57th Annual ESPE},
address={Athens, Greece},
year={2018},
month={09},
date={2018-09-27},
abstract={Background: Childhood obesity is a major global and European public health problem. The need for community-targeted actions has long been recognized, however it has been prevented by the lack of monitoring and evaluation framework, and the methodological inability to objectively quantify the local community characteristics in a reasonable timeframe. Recent technological achievements in mobile and wearable electronics and Big Data infrastructures allow the engagement of European citizens in the data collection process.}
}

2018

(C)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Honolulu, HI, USA, 2018 Oct
[Abstract][BibTex][pdf]

In this paper, we propose an end-to-end neural network (NN) architecture for detecting in-meal eating events (i.e., bites), using only a commercially available smartwatch. Our method combines convolutional and recurrent networks and is able to simultaneously learn intermediate data representations related to hand movements, as well as sequences of these movements that appear during eating. A promising F-score of 0.884 is achieved for detecting bites on a publicly available dataset with 10 subjects.
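A minimal tf.keras sketch of a convolutional-plus-recurrent detector of this kind is given below; the six IMU channels, the 128-sample window and the layer sizes are assumptions for illustration, not the authors' exact architecture.

# Sketch of a convolutional + recurrent network for per-window bite detection
# from 6-channel IMU data (3-axis accelerometer + 3-axis gyroscope). Window
# length and layer sizes are assumptions, not the authors' exact architecture.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 6)),            # one window of smartwatch IMU samples
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                          # models the sequence of hand movements
    tf.keras.layers.Dense(1, activation="sigmoid"),    # bite / no-bite probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()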

@conference{Kiritsis2018,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={End-to-end Learning for Measuring in-meal Eating Behavior from a Smartwatch},
booktitle={40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
address={Honolulu, HI, USA},
year={2018},
month={10},
date={2018-10-29},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2018end.pdf},
doi={http://10.1109/EMBC.2018.8513627},
issn={1558-4615},
isbn={978-1-5386-3647-3},
abstract={In this paper, we propose an end-to-end neural network (NN) architecture for detecting in-meal eating events (i.e., bites), using only a commercially available smartwatch. Our method combines convolutional and recurrent networks and is able to simultaneously learn intermediate data representations related to hand movements, as well as sequences of these movements that appear during eating. A promising F-score of 0.884 is achieved for detecting bites on a publicly available dataset with 10 subjects.}
}

2017

(J)
Billy Langlet, Anna Anvret, Christos Maramis, Ioannis Moulos, Vasileios Papapanagiotou, Christos Diou, Eirini Lekka, Rachel Heimeier, Anastasios Delopoulos and Ioannis Ioakimidis
Behaviour & Information Technology, 36, (10), pp. 1005-1013, 2017 May
[Abstract][BibTex][pdf]

Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies of quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app in order to record eating behaviours in a sample of 41 students, 16–17 years old. Without disturbing the normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females, during lunches of similar lengths. While both groups took similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger scale use of our methods is feasible, but more hypotheses-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.

@article{Langlet2017,
author={Billy Langlet and Anna Anvret and Christos Maramis and Ioannis Moulos and Vasileios Papapanagiotou and Christos Diou and Eirini Lekka and Rachel Heimeier and Anastasios Delopoulos and Ioannis Ioakimidis},
title={Objective measures of eating behaviour in a Swedish high school},
journal={Behaviour & Information Technology},
volume={36},
number={10},
pages={1005-1013},
year={2017},
month={05},
date={2017-05-06},
url={https://doi.org/10.1080/0144929X.2017.1322146},
doi={http://10.1080/0144929X.2017.1322146},
abstract={Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies of quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app in order to record eating behaviours in a sample of 41 students, 16–17 years old. Without disturbing the normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females, during lunches of similar lengths. While both groups took similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger scale use of our methods is feasible, but more hypotheses-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.}
}

2017

(C)
Vasilis Papapanagiotou, Christos Diou, Lingjuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 817-820, IEEE, 2017 Jul
[Abstract][BibTex][pdf]

Monitoring of eating behavior using wearable technology is receiving increased attention, driven by the recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. There are several chewing sensor types and chewing detection algorithms proposed in the bibliography; however, no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided both at eating event level and at individual chew level. We also provide a baseline evaluation method, and introduce the “challenge” of improving the baseline chewing detection algorithms. The dataset can be downloaded from http://dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code can be downloaded from https://github.com/mug-auth/chewing-detection-challenge.git.

@inproceedings{8036949,
author={Vasilis Papapanagiotou and Christos Diou and Lingjuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={The SPLENDID chewing detection challenge},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={817-820},
publisher={IEEE},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017splendid.pdf},
doi={http://10.1109/EMBC.2017.8036949},
abstract={Monitoring of eating behavior using wearable technology is receiving increased attention, driven by the recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. There are several chewing sensor types and chewing detection algorithms proposed in the bibliography, however no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided both at eating event level and at individual chew level. We also provide a baseline evaluation method, and introduce the “challenge” of improving the baseline chewing detection algorithms. The dataset can be downloaded from http: //dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code can be downloaded from https://github. com/mug-auth/chewing-detection-challenge.git.}
}

(C)
Vasilis Papapanagiotou, Christos Diou and Anastasios Delopoulos
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1258-1261, 2017 Jul
[Abstract][BibTex][pdf]

Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring still remains a challenging task. This is mainly due to the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distinctive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature-learning in the time domain with 1-dimensional convolutional neural networks for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.

@inproceedings{8037060,
author={Vasilis Papapanagiotou and Christos Diou and Anastasios Delopoulos},
title={Chewing detection from an in-ear microphone using convolutional neural networks},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={1258-1261},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017chewing.pdf},
doi={http://10.1109/EMBC.2017.8037060},
abstract={Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring still remains a challenging task. This is mainly due the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as due to to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distictive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature-learning in the time domain with 1-dimensional convolutional neural networks for for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.}
}

(C)
Konstantinos Kyritsis, Christina L. Tatli, Christos Diou and Anastasios Delopoulos
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2843-2846, IEEE, Seogwipo, South Korea, 2017 Jul
[Abstract][BibTex][pdf]

Automatic objective monitoring of eating behavior using inertial sensors is a research problem that has received a lot of attention recently, mainly due to the mass availability of IMUs and the evidence on the importance of quantifying and monitoring eating patterns. In this paper we propose a method for detecting food intake cycles during the course of a meal using a commercially available wristband. We first model micro-movements that are part of the intake cycle and then use HMMs to model the sequences of micro-movements leading to mouthfuls. Evaluation is carried out on an annotated dataset of 8 subjects where the proposed method achieves 0.78 precision and 0.77 recall. The evaluation dataset is publicly available at http://mug.ee.auth.gr/intake-cycle-detection/.

@inproceedings{8037449,
author={Konstantinos Kyritsis and Christina L. Tatli and Christos Diou and Anastasios Delopoulos},
title={Automated analysis of in meal eating behavior using a commercial wristband IMU sensor},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={2843-2846},
publisher={IEEE},
address={Seogwipo, South Korea},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2017automated.pdf},
doi={http://10.1109/EMBC.2017.8037449},
abstract={Automatic objective monitoring of eating behavior using inertial sensors is a research problem that has received a lot of attention recently, mainly due to the mass availability of IMUs and the evidence on the importance of quantifying and monitoring eating patterns. In this paper we propose a method for detecting food intake cycles during the course of a meal using a commercially available wristband. We first model micro-movements that are part of the intake cycle and then use HMMs to model the sequences of micro-movements leading to mouthfuls. Evaluation is carried out on an annotated dataset of 8 subjects where the proposed method achieves 0:78 precision and 0:77 recall. The evaluation dataset is publicly available at http://mug.ee.auth.gr/intake-cycle-detection/.}
}

(C)
Christos Diou, Ioannis Sarafis, Ioannis Ioakimidis and Anastasios Delopoulos
"Data-driven assessments for sensor measurements of eating behavior"
Biomedical & Health Informatics (BHI), 2017 IEEE EMBS International Conference on, pp. 129-132, 2017 Jan
[Abstract][BibTex][pdf]

Two major challenges in sensor-based measurement and assessment of healthy eating behavior are (a) choosing the behavioral indicators to be measured, and (b) interpreting the measured values. While much of the work towards solving these problems belongs in the domain of behavioral science, there are several areas where technology can help. This paper outlines an approach for representing and interpreting eating and activity behavior based on sensor measurements and data available from a reference population. The main idea is to assess the “similarity” of an individual's behavior to previous data recordings of a relevant reference population. Thus, by appropriate selection of the indicators and reference data it is possible to perform comparative behavioral evaluation and support decisions, even in cases where no clear medical guidelines for the indicator values exist. We examine the simple, univariate case (one indicator) and then extend these ideas to the multivariate problem (several indicators) using one-class SVM to measure the distance from the reference population.
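For the multivariate case, a small sketch of the distance-from-reference-population idea follows, using scikit-learn's OneClassSVM; the two indicators, their values and the nu parameter are illustrative, not the paper's configuration.

# Sketch of scoring one individual's behavioral indicators against a reference
# population with a one-class SVM; indicators, values and nu are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical reference population: columns could be e.g. eating rate (g/min)
# and meal duration (min).
reference = rng.normal(loc=[5.0, 20.0], scale=[1.0, 4.0], size=(200, 2))

scaler = StandardScaler().fit(reference)
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
ocsvm.fit(scaler.transform(reference))

individual = np.array([[9.5, 8.0]])                    # one individual's measurements
score = ocsvm.decision_function(scaler.transform(individual))[0]
print("signed distance from the reference population boundary:", score)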

@inproceedings{diou2017data,
author={Christos Diou and Ioannis Sarafis and Ioannis Ioakimidis and Anastasios Delopoulos},
title={Data-driven assessments for sensor measurements of eating behavior},
booktitle={Biomedical & Health Informatics (BHI), 2017 IEEE EMBS International Conference on},
pages={129-132},
year={2017},
month={01},
date={2017-01-01},
url={http://ieeexplore.ieee.org/document/7897222/},
abstract={Two major challenges in sensor-based measurement and assessment of healthy eating behavior are (a) choosing the behavioral indicators to be measured, and (b) interpreting the measured values. While much of the work towards solving these problems belongs in the domain of behavioral science, there are several areas where technology can help. This paper outlines an approach for representing and interpreting eating and activity behavior based on sensor measurements and data available from a reference population. The main idea is to assess the “similarity” of an individual\\'s behavior to previous data recordings of a relevant reference population. Thus, by appropriate selection of the indicators and reference data it is possible to perform comparative behavioral evaluation and support decisions, even in cases where no clear medical guidelines for the indicator values exist. We examine the simple, univariate case (one indicator) and then extend these ideas to the multivariate problem (several indicators) using one-class SVM to measure the distance from the reference population.}
}

(C)
Angelos Katharopoulos, Despoina Paschalidou, Christos Diou and Anastasios Delopoulos
"Learning local feature aggregation functions with backpropagation"
25th European Signal Processing Conference (EUSIPCO), pp. 748-752, IEEE, Kos, Greece, 2017 Aug
[Abstract][BibTex][pdf]

This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin.

@inproceedings{Katharopoulos2017,
author={Angelos Katharopoulos and Despoina Paschalidou and Christos Diou and Anastasios Delopoulos},
title={Learning local feature aggregation functions with backpropagation},
booktitle={25th European Signal Processing Conference (EUSIPCO)},
pages={748-752},
publisher={IEEE},
address={Kos, Greece},
year={2017},
month={08},
date={2017-08-28},
url={https://arxiv.org/pdf/1706.08580.pdf},
doi={http://10.23919/EUSIPCO.2017.8081307},
abstract={This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin.}
}

(C)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
New Trends in Image Analysis and Processing -- ICIAP 2017: ICIAP International Workshops, pp. 411-418, Springer International Publishing, Catania, Italy, 2017 Sep
[Abstract][BibTex][pdf]

Unobtrusive analysis of eating behavior based on Inertial Measurement Unit (IMU) sensors (e.g. accelerometer) is a topic that has attracted the interest of both the industry and the research community over the past years. This work presents a method for detecting food intake moments that occur during a meal session using the accelerometer and gyroscope signals of an off-the-shelf smartwatch. We propose a two step approach. First, we model the hand micro-movements that take place while eating using an array of binary Support Vector Machines (SVMs); then the detection of intake moments is achieved by processing the sequence of SVM score vectors by a Long Short Term Memory (LSTM) network. Evaluation is performed on a publicly available dataset with 10 subjects, where the proposed method outperforms similar approaches by achieving an F1 score of 0.892.

@inproceedings{Kyritsis2017ICIAP,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={Food Intake Detection from Inertial Sensors Using LSTM Networks},
booktitle={New Trends in Image Analysis and Processing -- ICIAP 2017: ICIAP International Workshops},
pages={411-418},
publisher={Springer International Publishing},
editor={Battiato, Sebastiano and Farinella, Giovanni Maria and Leo, Marco and Gallo, Giovanni},
address={Catania, Italy},
year={2017},
month={09},
date={2017-09-11},
url={https://mug.ee.auth.gr/wp-content/uploads/madima2017.pdf},
doi={http://10.1007/978-3-319-70742-6_39},
keywords={Food intake;Eating monitoring;Wearable sensors;LSTM},
abstract={Unobtrusive analysis of eating behavior based on Inertial Measurement Unit (IMU) sensors (e.g. accelerometer) is a topic that has attracted the interest of both the industry and the research community over the past years. This work presents a method for detecting food intake moments that occur during a meal session using the accelerometer and gyroscope signals of an off-the-shelf smartwatch. We propose a two step approach. First, we model the hand micro-movements that take place while eating using an array of binary Support Vector Machines (SVMs); then the detection of intake moments is achieved by processing the sequence of SVM score vectors by a Long Short Term Memory (LSTM) network. Evaluation is performed on a publicly available dataset with 10 subjects, where the proposed method outperforms similar approaches by achieving an F1 score of 0.892.}
}

2016

(J)
Vasilis Papapanagiotou, Christos Diou, Lingchuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2016 Jan
[Abstract][BibTex][pdf]

In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smart phones, can be used to robustly extract objective, and real-time measurements of human behaviour. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this work, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high accuracy and low sampling rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects, that includes more than 60 hours of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.
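The late-fusion step can be illustrated with a few lines of scikit-learn code: one SVM per modality, decision scores combined with a weighted sum. The features, labels and fusion weight below are placeholders, not the actual SPLENDID pipeline.

# Sketch of late fusion of per-modality SVM scores; features, labels and the
# fusion weight alpha are placeholders, not the actual SPLENDID pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=28, random_state=1)
X_audio, X_ppg = X[:, :20], X[:, 20:]    # stand-ins for audio and PPG feature vectors

svm_audio = SVC(kernel="rbf").fit(X_audio, y)
svm_ppg = SVC(kernel="rbf").fit(X_ppg, y)

alpha = 0.6                               # hypothetical weight favouring the audio modality
fused = alpha * svm_audio.decision_function(X_audio) \
        + (1 - alpha) * svm_ppg.decision_function(X_ppg)
y_pred = (fused > 0).astype(int)          # final eating / non-eating decision per window
print("fused training accuracy:", (y_pred == y).mean())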

@article{7736096,
author={Vasilis Papapanagiotou and Christos Diou and Lingchuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={A novel chewing detection system based on PPG, audio and accelerometry},
journal={IEEE Journal of Biomedical and Health Informatics},
volume={PP},
number={99},
pages={1-1},
year={2016},
month={01},
date={2016-01-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017novel.pdf},
doi={http://10.1109/JBHI.2016.2625271},
keywords={Ear;Informatics;Microphones;Monitoring;Sensor systems;Signal processing algorithms},
abstract={In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smart phones, can be used to robustly extract objective, and real-time measurements of human behaviour. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this work, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high accuracy and low sampling rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects, that includes more than 60 hours of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.}
}

(J)
Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles A. Mitkas
"Response modeling of small-scale energy consumers for effective demand response applications"
Electric Power Systems Research, 132, pp. 78-93, 2016 Mar
[Abstract][BibTex][pdf]

The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.

@article{Chrysopoulos2016Response,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Response modeling of small-scale energy consumers for effective demand response applications},
journal={Electric Power Systems Research},
volume={132},
pages={78-93},
year={2016},
month={03},
date={2016-03-01},
url={http://www.sciencedirect.com/science/article/pii/S0378779615003223},
doi={http://10.1016/j.epsr.2015.10.026},
abstract={Abstract The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.}
}

(J)
Vasileios Papapanagiotou, Christos Diou and Anastasios Delopoulos
ACM Transactions on Multimedia Computing, Communications, and Applications, 12, (2), 2016 Mar
[Abstract][BibTex][pdf]

This article presents a novel approach to training classifiers for concept detection using tags and a variant of Support Vector Machine that enables the usage of training weights per sample. Combined with an appropriate tag weighting mechanism, more relevant samples play a more important role in the calibration of the final concept-detector model. We propose a complete, automated framework that (i) calculates relevance scores for each image-concept pair based on image tags, (ii) transforms the scores into relevance probabilities and automatically annotates each image according to this probability, (iii) transforms either the relevance scores or the probabilities into appropriate training weights and finally, (iv) incorporates the training weights and the visual features into a Fuzzy Support Vector Machine classifier to build the concept-detector model. The framework can be applied to online public collections, by gathering a large pool of diverse images, and using the calculated probability to select a training set and the associated training weights. To evaluate our argument, we experiment on two large annotated datasets. Experiments highlight the retrieval effectiveness of the proposed approach. Furthermore, experiments with various levels of annotation error show that using weights derived from tags significantly increases the robustness of the resulting concept detectors.

@article{Papapanagiotou2016Improving,
author={Vasileios Papapanagiotou and Christos Diou and Anastasios Delopoulos},
title={Improving Concept-Based Image Retrieval with Training Weights Computed from Tags},
journal={ACM Transactions on Multimedia Computing, Communications, and Applications},
volume={12},
number={2},
year={2016},
month={03},
date={2016-03-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2016improving.pdf},
doi={http://10.1145/2790230},
abstract={This article presents a novel approach to training classifiers for concept detection using tags and a variant of Support Vector Machine that enables the usage of training weights per sample. Combined with an appropriate tag weighting mechanism, more relevant samples play a more important role in the calibration of the final concept-detector model. We propose a complete, automated framework that (i) calculates relevance scores for each image-concept pair based on image tags, (ii) transforms the scores into relevance probabilities and automatically annotates each image according to this probability, (iii) transforms either the relevance scores or the probabilities into appropriate training weights and finally, (iv) incorporates the training weights and the visual features into a Fuzzy Support Vector Machine classifier to build the concept-detector model. The framework can be applied to online public collections, by gathering a large pool of diverse images, and using the calculated probability to select a training set and the associated training weights. To evaluate our argument, we experiment on two large annotated datasets. Experiments highlight the retrieval effectiveness of the proposed approach. Furthermore, experiments with various levels of annotation error show that using weights derived from tags significantly increases the robustness of the resulting concept detectors.}
}

(J)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Online training of concept detectors for image retrieval using streaming clickthrough data"
Engineering Applications of Artificial Intelligence, 51, pp. 150-162, 2016 Jan
[Abstract][BibTex][pdf]

Clickthrough data from image search engines provide a massive and continuously generated source of user feedback that can be used to model how the search engine users perceive the visual content. Image clickthrough data have been successfully used to build concept detectors without any manual annotation effort, although the generated annotations suffer from labeling errors. Previous research efforts therefore focused on modeling the sample uncertainty in order to improve concept detector effectiveness. In this paper, we study the problem in an online learning setting using streaming clickthrough data where each click is treated separately when it becomes available; the concept detector model is therefore continuously updated without batch retraining. We argue that sample uncertainty can be incorporated in the online learning setting by exploiting the repetitions of incoming clicks at the classifier level, where these act as an implicit importance weighting mechanism. For online concept detector training we use the LASVM algorithm. The inferred weighting approximates the solution of batch trained concept detectors using weighted SVM variants that are known to achieve improved performance and high robustness to noise compared to the standard SVM. Furthermore, we evaluate methods for selecting negative samples using a small number of candidates sampled locally from the incoming stream of clicks. The selection criteria aim at drastically improving the performance and the convergence speed of the online concept detectors. To validate our arguments we conduct experiments for 30 concepts on the Clickture-Lite dataset. The experimental results demonstrate that: (a) the proposed online approach produces effective and noise resilient concept detectors that can take advantage of streaming clickthrough data and achieve performance that is equivalent to Fuzzy SVM concept detectors with sample weights and 78.6% improved compared to standard SVM concept detectors; and (b) the selection criteria speed up convergence and improve effectiveness compared to random negative sampling even for a small number of available clicks (up to 134% after 100 clicks).
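The streaming-update idea can be approximated in a few lines of scikit-learn code using SGDClassifier with a hinge loss and partial_fit as a stand-in for LASVM; repeated clicks on the same image revisit the same sample, which acts as the implicit importance weighting discussed above. The feature dimensionality and the simulated click stream are placeholders.

# Hedged analogue of online concept-detector updates from streaming clicks.
# LASVM is not implemented here; SGDClassifier with hinge loss and partial_fit
# is used as a stand-in, and repeated clicks on the same image revisit the same
# sample, acting as an implicit importance weight.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="hinge", alpha=1e-4)
classes = np.array([0, 1])

def on_click(feature_vec, label):
    """Update the detector with a single clicked (positive) or sampled negative image."""
    clf.partial_fit(feature_vec.reshape(1, -1), [label], classes=classes)

# Simulated stream of clicks: label 1 for clicked images, 0 for sampled negatives.
for _ in range(500):
    label = int(rng.random() < 0.5)
    x = rng.normal(loc=label, scale=1.0, size=16)      # toy 16-dimensional visual features
    on_click(x, label)

print("score for a new image:", clf.decision_function(rng.normal(1.0, 1.0, (1, 16)))[0])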

@article{Sarafis2016Online,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Online training of concept detectors for image retrieval using streaming clickthrough data},
journal={Engineering Applications of Artificial Intelligence},
volume={51},
pages={150-162},
year={2016},
month={01},
date={2016-01-29},
url={http://www.sciencedirect.com/science/article/pii/S095219761600021X},
doi={http://dx.doi.org/10.1016/j.engappai.2016.01.017},
keywords={Clickthrough data;Online learning;Image retrieval;Label noise;Fuzzy SVM;LASVM},
abstract={Clickthrough data from image search engines provide a massive and continuously generated source of user feedback that can be used to model how the search engine users perceive the visual content. Image clickthrough data have been successfully used to build concept detectors without any manual annotation effort, although the generated annotations suffer from labeling errors. Previous research efforts therefore focused on modeling the sample uncertainty in order to improve concept detector effectiveness. In this paper, we study the problem in an online learning setting using streaming clickthrough data where each click is treated seperately when it becomes available; the concept detector model is therefore continuously updated without batch retraining. We argue that sample uncertainty can be incorporated in the online learning setting by exploiting the repetitions of incoming clicks at the classifier level, where these act as an implicit importance weighting mechanism. For online concept detector training we use the LASVM algorithm. The inferred weighting approximates the solution of batch trained concept detectors using weighted SVM variants that are known to achieve improved performance and high robustness to noise compared to the standard SVM. Furthermore, we evaluate methods for selecting negative samples using a small number of candidates sampled locally from the incoming stream of clicks. The selection criteria aim at drastically improving the performance and the convergence speed of the online concept detectors. To validate our arguments we conduct experiments for 30 concepts on the Clickture-Lite dataset. The experimental results demonstrate that: (a) the proposed online approach produces effective and noise resilient concept detectors that can take advantage of streaming clickthrough data and achieve performance that is equivalent to Fuzzy SVM concept detectors with sample weights and 78.6% improved compared to standard SVM concept detectors; and (b) the selection criteria speed up convergence and improve effectiveness compared to random negative sampling even for a small number of available clicks (up to 134% after 100 clicks).}
}

2016

(C)
Vasilis Papapanagiotou, Christos Diou, Lingchuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6485-6488, 2016 Aug
[Abstract][BibTex][pdf]

Monitoring of human eating behaviour has been attracting interest over the last few years, as a means to a healthy lifestyle, but also due to its association with serious health conditions, such as eating disorders and obesity. Use of self-reports and other non-automated means of monitoring have been found to be unreliable, compared to the use of wearable sensors. Various modalities have been reported, such as acoustic signal from ear-worn microphones, or signal from wearable strain sensors. In this work, we introduce a new sensor for the task of chewing detection, based on a novel photoplethysmography (PPG) sensor placed on the outer earlobe to perform the task. We also present a processing pipeline that includes two chewing detection algorithms from literature and one new algorithm, to process the captured PPG signal, and present their effectiveness. Experiments are performed on an annotated dataset recorded from 21 individuals, including more than 10 hours of eating and non-eating activities. Results show that the PPG sensor can be successfully used to support dietary monitoring.

@inproceedings{7592214,
author={Vasilis Papapanagiotou and Christos Diou and Lingchuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={A novel approach for chewing detection based on a wearable PPG sensor},
booktitle={2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={6485-6488},
year={2016},
month={08},
date={2016-08-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2016novel.pdf},
doi={http://10.1109/EMBC.2016.7592214},
keywords={Ear;Microphones;Monitoring;Medical disorders;Optical sensors;Patient monitoring;Photoplethysmography;PPG signal;Acoustic signal;Chewing detection;Dietary monitoring;Ear-worn microphone;Eating disorder;Health condition;Human eating behaviour monitoring;Noneating Activity;Obesity;Wearable PPG sensor;Wearable sensor;Wearable strain sensor;Light emitting diodes;Muscles;Pipelines;Prototypes},
abstract={Monitoring of human eating behaviour has been attracting interest over the last few years, as a means to a healthy lifestyle, but also due to its association with serious health conditions, such as eating disorders and obesity. Use of self-reports and other non-automated means of monitoring have been found to be unreliable, compared to the use of wearable sensors. Various modalities have been reported, such as acoustic signal from ear-worn microphones, or signal from wearable strain sensors. In this work, we introduce a new sensor for the task of chewing detection, based on a novel photoplethysmography (PPG) sensor placed on the outer earlobe to perform the task. We also present a processing pipeline that includes two chewing detection algorithms from literature and one new algorithm, to process the captured PPG signal, and present their effectiveness. Experiments are performed on an annotated dataset recorded from 21 individuals, including more than 10 hours of eating and non-eating activities. Results show that the PPG sensor can be successfully used to support dietary monitoring.}
}

(C)
Angelos Katharopoulos, Despoina Paschalidou, Christos Diou and Anastasios Delopoulos
"Fast Supervised LDA for discovering micro-events in large-scale video datasets"
In proceedings of the 24th ACM international conference on multimedia (ACM-MM 2016), Amsterdam, The Netherlands, 2016 Oct
[Abstract][BibTex][pdf]

This paper introduces fsLDA, a fast variational inference method for supervised LDA, which overcomes the computational limitations of the original supervised LDA and enables its application in large-scale video datasets. In addition to its scalability, our method also overcomes the drawbacks of standard, unsupervised LDA for video, including its focus on dominant but often irrelevant video information (e.g. background, camera motion). As a result, experiments in the UCF11 and UCF101 datasets show that our method consistently outperforms unsupervised LDA in every metric. Furthermore, analysis shows that class-relevant topics of fsLDA lead to sparse video representations and encapsulate high-level information corresponding to parts of video events, which we denote “micro-events”.

@inproceedings{KatharopoulosACMMM2016,
author={Angelos Katharopoulos and Despoina Paschalidou and Christos Diou and Anastasios Delopoulos},
title={Fast Supervised LDA for discovering micro-events in large-scale video datasets},
booktitle={In proceedings of the 24th ACM international conference on multimedia (ACM-MM 2016)},
address={Amsterdam, The Netherlands},
year={2016},
month={10},
date={2016-10-15},
url={http://mug.ee.auth.gr/wp-content/uploads/fsLDA.pdf},
abstract={This paper introduces fsLDA, a fast variational inference method for supervised LDA, which overcomes the computational limitations of the original supervised LDA and enables its application in large-scale video datasets. In addition to its scalability, our method also overcomes the drawbacks of standard, unsupervised LDA for video, including its focus on dominant but often irrelevant video information (e.g. background, camera motion). As a result, experiments in the UCF11 and UCF101 datasets show that our method consistently outperforms unsupervised LDA in every metric. Furthermore, analysis shows that class-relevant topics of fsLDA lead to sparse video representations and encapsulate high-level information corresponding to parts of video events, which we denote \\'\\'micro-events\\'\\'.}
}

2015

(J)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Building effective SVM concept detectors from clickthrough data for large-scale image retrieval"
International Journal of Multimedia Information Retrieval, 4, (2), pp. 129-142, 2015 Jun
[Abstract][BibTex][pdf]

Clickthrough data is a source of information that can be used for automatically building concept detectors for image retrieval. Previous studies, however, have shown that in many cases the resulting training sets suffer from severe label noise that has a significant impact on the SVM concept detector performance. This paper evaluates and proposes a set of strategies for automatically building effective concept detectors from clickthrough data. These strategies focus on: (1) automatic training set generation; (2) assignment of label confidence weights to the training samples and (3) using these weights at the classifier level to improve concept detector effectiveness. For training set selection and in order to assign weights to individual training samples three Information Retrieval (IR) models are examined: vector space models, BM25 and language models. Three SVM variants that take into account importance at the classifier level are evaluated and compared to the standard SVM: the Fuzzy SVM, the Power SVM, and the Bilateral-weighted Fuzzy SVM. Experiments conducted on the MM Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) for 40 concepts demonstrate that (1) on average, all weighted SVM variants are more effective than the standard SVM; (2) the vector space model produces the best training sets and best weights; (3) the Bilateral-weighted Fuzzy SVM produces the best results but is very sensitive to weight assignment and (4) the Fuzzy SVM is the most robust training approach for varying levels of label noise.

@article{Sarafis2015Building,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Building effective SVM concept detectors from clickthrough data for large-scale image retrieval},
journal={International Journal of Multimedia Information Retrieval},
volume={4},
number={2},
pages={129-142},
year={2015},
month={06},
date={2015-06-01},
url={http://link.springer.com/article/10.1007/s13735-015-0080-5},
doi={http://10.1007/s13735-015-0080-5},
abstract={Clickthrough data is a source of information that can be used for automatically building concept detectors for image retrieval. Previous studies, however, have shown that in many cases the resulting training sets suffer from severe label noise that has a significant impact in the SVM concept detector performance. This paper evaluates and proposes a set of strategies for automatically building effective concept detectors from clickthrough data. These strategies focus on: (1) automatic training set generation; (2) assignment of label confidence weights to the training samples and (3) using these weights at the classifier level to improve concept detector effectiveness. For training set selection and in order to assign weights to individual training samples three Information Retrieval (IR) models are examined: vector space models, BM25 and language models. Three SVM variants that take into account importance at the classifier level are evaluated and compared to the standard SVM: the Fuzzy SVM, the Power SVM, and the Bilateral-weighted Fuzzy SVM. Experiments conducted on the MM Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) for 40 concepts demonstrate that (1) on average, all weighted SVM variants are more effective than the standard SVM; (2) the vector space model produces the best training sets and best weights; (3) the Bilateral-weighted Fuzzy SVM produces the best results but is very sensitive to weight assignment and (4) the Fuzzy SVM is the most robust training approach for varying levels of label noise.}
}

2015

(C)
Vasileios Papapanagiotou, Christos Diou, Billy Langlet, Ioannis Ioakimidis and Anastasios Delopoulos
Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, April 15-17, 2015. Proceedings, Part II, pp. 35-46, Springer International Publishing, Cham, 2015 Jan
[Abstract][BibTex]

Recent studies and clinical practice have shown that the extraction of detailed eating behaviour indicators is critical in identifying risk factors and/or treating obesity and eating disorders, such as anorexia and bulimia nervosa. A number of single meal analysis methods that have been successfully applied are based on the Mandometer, a weight scale that continuously measures the weight of food on a plate over the course of a meal. Experimental meal analysis is performed using the cumulative food intake curve, which is produced by the semi-automatic processing of the Mandometer weight measurements, in tandem with the video recordings of the eating session. Due to its complexity and the video recording dependence, this process is not suited to a clinical or a real-life setting. In this work, we evaluate a method for automating the extraction of an accurate food intake curve, corrected for food additions during the meal and artificial weight fluctuations, using only the raw Mandometer output. Since the method requires no manual corrections or external video recordings it is appropriate for clinical or free-living use. Three algorithms are presented based on rules, greedy decisioning and exhaustive search, as well as evaluation methods of the Mandometer measurements. Experiments on a set of 114 meals collected from both normal and disordered eaters in a clinical environment illustrate the effectiveness of the proposed approach.

@inproceedings{Papapanagiotou2015Automated,
author={Vasileios Papapanagiotou and Christos Diou and Billy Langlet and Ioannis Ioakimidis and Anastasios Delopoulos},
title={Automated Extraction of Food Intake Indicators from Continuous Meal Weight Measurements},
booktitle={Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, April 15-17, 2015. Proceedings, Part II},
pages={35-46},
publisher={Springer International Publishing},
editor={Ortu\~{n}o, Francisco and Rojas, Ignacio},
address={Cham},
year={2015},
month={01},
date={2015-01-01},
doi={http://10.1007/978-3-319-16480-9_4},
isbn={978-3-319-16480-9},
abstract={Recent studies and clinical practice have shown that the extraction of detailed eating behaviour indicators is critical in identifying risk factors and/or treating obesity and eating disorders, such as anorexia and bulimia nervosa. A number of single meal analysis methods that have been successfully applied are based on the Mandometer, a weight scale that continuously measures the weight of food on a plate over the course of a meal. Experimental meal analysis is performed using the cumulative food intake curve, which is produced by the semi-automatic processing of the Mandometer weight measurements, in tandem with the video recordings of the eating session. Due to its complexity and the video recording dependence, this process is not suited to a clinical or a real-life setting. In this work, we evaluate a method for automating the extraction of an accurate food intake curve, corrected for food additions during the meal and artificial weight fluctuations, using only the raw Mandometer output. Since the method requires no manual corrections or external video recordings it is appropriate for clinical or free-living use. Three algorithms are presented based on rules, greedy decisioning and exhaustive search, as well as evaluation methods of the Mandometer measurements. Experiments on a set of 114 meals collected from both normal and disordered eaters in a clinical environment illustrate the effectiveness of the proposed approach.}
}
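
As a rough illustration of the rule-based idea in the abstract above (not the paper's actual algorithm), the sketch below turns a noisy sequence of plate-weight samples into a monotone cumulative intake curve by counting weight drops as consumed food and ignoring upward jumps, whether they come from food additions or artificial fluctuations. The function name and the toy measurements are hypothetical.

import numpy as np

def intake_curve(weights):
    """Rule-based sketch: convert raw plate-weight samples (grams) into a
    cumulative food intake curve. Weight drops count as intake; upward
    jumps (food additions or artifacts) leave the curve unchanged."""
    weights = np.asarray(weights, dtype=float)
    intake = np.zeros_like(weights)
    for t in range(1, len(weights)):
        delta = weights[t] - weights[t - 1]
        if delta < 0:
            intake[t] = intake[t - 1] - delta  # food removed from the plate
        else:
            intake[t] = intake[t - 1]          # addition or artifact: ignore
    return intake

# Toy meal: 10 g bites, one +2 g artifact and one +80 g food addition.
plate = [300, 290, 280, 282, 272, 352, 342, 332, 322]
print(intake_curve(plate))  # monotonically non-decreasing intake in grams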

(C)
Vasileios Papapanagiotou, Christos Diou, Zhou Lingchuan, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"Fractal Nature of Chewing Sounds"
New Trends in Image Analysis and Processing--ICIAP 2015 Workshops, pp. 401-408, 2015 Apr
[Abstract][BibTex][pdf]

In the battle against Obesity as well as Eating Disorders, non-intrusive dietary monitoring has been investigated by many researchers. For this purpose, one of the most promising modalities is the acoustic signal captured by a common microphone placed inside the outer ear canal. Various chewing detection algorithms for this type of signals exist in the literature. In this work, we perform a systematic analysis of the fractal nature of chewing sounds, and find that the Fractal Dimension is substantially different between chewing and talking. This holds even for severely down-sampled versions of the recordings. We derive chewing detectors based on the fractal dimension of the recorded signals that can clearly discriminate chewing from non-chewing sounds. We experimentally evaluate snacking detection based on the proposed chewing detector, and we compare our approach against well known counterparts. Experimental results on a large dataset of 10 subjects and total recordings duration of more than 8 hours demonstrate the high effectiveness of our method. Furthermore, there exists indication that discrimination between different properties (such as crispness) is possible.

@inproceedings{Papapanagiotou2015Fractal,
author={Vasileios Papapanagiotou and Christos Diou and Zhou Lingchuan and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={Fractal Nature of Chewing Sounds},
booktitle={New Trends in Image Analysis and Processing--ICIAP 2015 Workshops},
pages={401-408},
year={2015},
month={04},
date={2015-04-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2015fractal.pdf},
doi={http://10.1007/978-3-319-23222-5_49},
abstract={In the battle against Obesity as well as Eating Disorders, non-intrusive dietary monitoring has been investigated by many researchers. For this purpose, one of the most promising modalities is the acoustic signal captured by a common microphone placed inside the outer ear canal. Various chewing detection algorithms for this type of signals exist in the literature. In this work, we perform a systematic analysis of the fractal nature of chewing sounds, and find that the Fractal Dimension is substantially different between chewing and talking. This holds even for severely down-sampled versions of the recordings. We derive chewing detectors based on the the fractal dimension of the recorded signals that can clearly discriminate chewing from non-chewing sounds. We experimentally evaluate snacking detection based on the proposed chewing detector, and we compare our approach against well known counterparts. Experimental results on a large dataset of 10 subjects and total recordings duration of more than 8 hours demonstrate the high effectiveness of our method. Furthermore, there exists indication that discrimination between different properties (such as crispness) is possible.}
}
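
The fractal-dimension idea above can be illustrated with a Higuchi-style estimator, one common fractal dimension estimator (the paper does not necessarily use this exact one): frames of chewing sound and frames of speech tend to yield different dimension values, so a tuned threshold can act as a chewing detector. The frame-level detector below, its threshold value and the threshold direction are hypothetical.

import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (a standard estimator,
    shown only to illustrate FD-based chewing detection)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    # FD is the slope of log L(k) against log(1/k), since L(k) ~ k^(-FD).
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

def is_chewing(audio_frame, threshold=1.5):
    """Hypothetical frame-level detector: compare the frame's fractal
    dimension to a tuned threshold (value and direction illustrative)."""
    return higuchi_fd(audio_frame) > threshold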

(C)
Vasileios Papapanagiotou, Christos Diou, Billy Langlet, Ioannis Ioakimidis and Anastasios Delopoulos
"A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements"
2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 7853-7856, IEEE, 2015 Aug
[Abstract][BibTex][pdf]

Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject's food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach.

@inproceedings{Papapanagiotou2015Parametric,
author={Vasileios Papapanagiotou and Christos Diou and Billy Langlet and Ioannis Ioakimidis and Anastasios Delopoulos},
title={A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements},
booktitle={2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={7853-7856},
publisher={IEEE},
year={2015},
month={08},
date={2015-08-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2015parametric.pdf},
doi={http://10.1109/EMBC.2015.7320212},
abstract={Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject\\\'s food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach.}
}
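
The Delta coefficients mentioned in the abstract are, presumably, the standard regression-based deltas commonly used in speech processing; a minimal sketch is given below so the pre-processing step is concrete. The window length and the toy weight sequence are hypothetical, and the Probabilistic Context-Free Grammar stage is not shown.

import numpy as np

def delta_coefficients(x, window=2):
    """Regression-based delta coefficients over a sliding window; on a
    plate-weight sequence they go negative around bites and spike
    positive at food additions."""
    x = np.asarray(x, dtype=float)
    padded = np.pad(x, window, mode="edge")
    denom = 2.0 * sum(n * n for n in range(1, window + 1))
    deltas = np.zeros_like(x)
    for t in range(len(x)):
        c = t + window  # index into the padded sequence
        deltas[t] = sum(n * (padded[c + n] - padded[c - n])
                        for n in range(1, window + 1)) / denom
    return deltas

weights = [300, 300, 289, 288, 277, 276, 356, 345]  # grams, with one addition
print(np.round(delta_coefficients(weights), 1))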

2014

(J)
A. Chrysopoulos, C. Diou, A.L. Symeonidis and P.A. Mitkas
"Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications"
Engineering Applications of Artificial Intelligence, 35, pp. 299-315, 2014 Sep
[Abstract][BibTex][pdf]

In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.

@article{Chrysopoulos2014Bottom,
author={A. Chrysopoulos and C. Diou and A.L. Symeonidis and P.A. Mitkas},
title={Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications},
journal={Engineering Applications of Artificial Intelligence},
volume={35},
pages={299-315},
year={2014},
month={09},
date={2014-09-01},
url={http://www.sciencedirect.com/science/article/pii/S0952197614001377},
doi={http://10.1016/j.engappai.2014.06.015},
abstract={In contemporary power systems, small-scale consumers account for up to 50% of a country?s total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}
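
A minimal sketch of the bottom-up aggregation and load-shifting ideas described above: each appliance activity contributes a small load profile at some start hour, the household curve is their sum, and shifting an activity changes the daily peak. The appliance names, profiles and shift are made up for illustration; the paper's statistical consumer models are far richer.

import numpy as np

# Hypothetical activities: (hourly load profile in kW, start hour).
ACTIVITIES = {
    "washing_machine": (np.array([2.0, 1.5, 0.5]), 19),
    "dishwasher":      (np.array([1.8, 1.2]),      20),
    "oven":            (np.array([2.5]),           19),
}

def household_load(activities, shifts=None):
    """Bottom-up aggregation of appliance activities into a 24-hour
    household load curve, with optional per-activity start-time shifts."""
    shifts = shifts or {}
    load = np.zeros(24)
    for name, (profile, start) in activities.items():
        s = (start + shifts.get(name, 0)) % 24
        for i, kw in enumerate(profile):
            load[(s + i) % 24] += kw
    return load

base = household_load(ACTIVITIES)
shifted = household_load(ACTIVITIES, shifts={"washing_machine": 2})
print(f"peak before: {base.max():.1f} kW, after shifting: {shifted.max():.1f} kW")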

(J)
G. Mamalakis, C. Diou, A.L. Symeonidis and L. Georgiadis
"Of daemons and men: A file system approach towards intrusion detection"
Applied Soft Computing, 25, pp. 1-14, 2014 Dec
[Abstract][BibTex][pdf]

We present FI2DS, a file system, host based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs) and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI2DS detection rates range between 91% and 95.9% with corresponding false positive rates ranging between 8.1 × 10^-2 % and 9.3 × 10^-4 %. Comparison of FI2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.

@article{Mamalakis2014Daemons,
author={G. Mamalakis and C. Diou and A.L. Symeonidis and L. Georgiadis},
title={Of daemons and men: A file system approach towards intrusion detection},
journal={Applied Soft Computing},
volume={25},
pages={1-14},
year={2014},
month={12},
date={2014-12-01},
url={http://www.sciencedirect.com/science/article/pii/S1568494614004311},
doi={http://10.1016/j.asoc.2014.07.026},
abstract={We present FI2DS, a file system, host based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs) and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI2DS detection rates range between 91% and 95.9% with corresponding false positive rates ranging between 8.1 × 10^-2 % and 9.3 × 10^-4 %. Comparison of FI2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.}
}
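
One of the two classifiers named in the abstract is a one-class SVM applied to windows of file-system features; the sketch below shows that general setup with scikit-learn. The paper's own feature set and implementation differ, and the per-window counts here are invented.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical per-window features from file-system audit records, e.g.
# counts of opens, writes and unlinks outside the web root (illustrative).
normal_windows = rng.poisson(lam=[5, 2, 0], size=(500, 3)).astype(float)
suspect_window = np.array([[4.0, 3.0, 12.0]])  # unusual burst of deletions

# Train a one-class model on normal web-daemon activity only.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_windows)

def classify_windows(feature_windows):
    """Moving-window decision: -1 marks an anomalous window, +1 normal."""
    return model.predict(feature_windows)

print(classify_windows(normal_windows[:3]))  # mostly +1
print(classify_windows(suspect_window))      # expected -1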

2014

(C)
Christos Maramis, Christos Diou, Ioannis Ioakeimidis, Irini Lekka, Gabriela Dudnik, Monica Mars, Nikos Maglaveras, Cecilia Bergh and Anastasios Delopoulos
"SPLENDID: Preventing Obesity and Eating Disorders through Long-term Behavioural Modifications"
MOBIHEALTH 2014, Athens, Greece, 2014 Nov
[Abstract][BibTex]

@inproceedings{Maramis2017SPLENDID,
author={Christos Maramis and Christos Diou and Ioannis Ioakeimidis and Irini Lekka and Gabriela Dudnik and Monica Mars and Nikos Maglaveras and Cecilia Bergh and Anastasios Delopoulos},
title={SPLENDID: Preventing Obesity and Eating Disorders through Long-term Behavioural Modifications},
booktitle={MOBIHEALTH 2014},
address={Athens, Greece},
year={2014},
month={11},
date={2014-11-01}
}

(C)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Building Robust Concept Detectors from Clickthrough Data: A Study in the MSR-Bing Dataset"
2014 9th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), pp. 66-71, 2014 Nov
[Abstract][BibTex][pdf]

In this paper we extend our previous work on strategies for automatically constructing noise resilient SVM detectors from click through data for large scale concept-based image retrieval. First, search log data is used in conjunction with Information Retrieval (IR) models to score images with respect to each concept. The IR models evaluated in this work include Vector Space Models (VSM), BM25 and Language Models (LM). The scored images are then used to create training sets for SVM and appropriate sample weights for two SVM variants: the Fuzzy SVM (FSVM) and the Power SVM (PSVM). These SVM variants incorporate weights for each individual training sample and can therefore be used to model label uncertainty at the classifier level. Experiments on the MSR-Bing Image Retrieval Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) show that FSVM is the most robust SVM algorithm for handling label noise and that the highest performance is achieved with weights derived from VSM. These results extend our previous findings on the value of FSVM from professional image archives to large-scale general purpose search engines, and furthermore identify VSM as the most appropriate sample weighting model.

@inproceedings{Sarafis2014Building,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Building Robust Concept Detectors from Clickthrough Data: A Study in the MSR-Bing Dataset},
booktitle={2014 9th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP)},
pages={66-71},
year={2014},
month={11},
date={2014-11-01},
url={http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6978955},
doi={http://10.1109/SMAP.2014},
abstract={In this paper we extend our previous work on strategies for automatically constructing noise resilient SVM detectors from click through data for large scale concept-based image retrieval. First, search log data is used in conjunction with Information Retrieval (IR) models to score images with respect to each concept. The IR models evaluated in this work include Vector Space Models (VSM), BM25 and Language Models (LM). The scored images are then used to create training sets for SVM and appropriate sample weights for two SVM variants: the Fuzzy SVM (FSVM) and the Power SVM (PSVM). These SVM variants incorporate weights for each individual training sample and can therefore be used to model label uncertainty at the classifier level. Experiments on the MSR-Bing Image Retrieval Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) show that FSVM is the most robust SVM algorithm for handling label noise and that the highest performance is achieved with weights derived from VSM. These results extend our previous findings on the value of FSVM from professional image archives to large-scale general purpose search engines, and furthermore identify VSM as the most appropriate sample weighting model.}
}
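
To make the scoring step concrete: under a Vector Space Model (the best-performing weighting according to the abstract above), each image can be represented by the queries that led to clicks on it and scored by TF-IDF cosine similarity against the concept description, with the scores reused as per-sample weights. The click log, concept query and use of scikit-learn are illustrative assumptions, not the paper's pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical click log: each image is described by its clicked queries.
image_click_text = {
    "img_001": "red sports car fast car",
    "img_002": "car dealership parking",
    "img_003": "mountain lake sunrise",
}
concept_query = "car automobile vehicle"

# VSM scoring: TF-IDF + cosine similarity of click text against the concept;
# the resulting scores can serve as sample weights for a Fuzzy SVM.
vec = TfidfVectorizer()
docs = vec.fit_transform(list(image_click_text.values()) + [concept_query])
scores = cosine_similarity(docs[:-1], docs[-1]).ravel()

for name, s in zip(image_click_text, scores):
    print(f"{name}: weight {s:.2f}")  # higher weight = more confident positive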

(C)
Ioannis Sarafis, Christos Diou, Theodora Tsikrika and Anastasios Delopoulos
"Weighted SVM from clickthrough data for image retrieval"
2014 IEEE International Conference on Image Processing (ICIP), pp. 3013-3017, 2014 Aug
[Abstract][BibTex][pdf]

In this paper we propose a novel approach to training noise-resilient concept detectors from clickthrough data collected by image search engines. We take advantage of the query logs to automatically produce concept detector training sets; these suffer though from label noise, i.e., erroneously assigned labels. We explore two alternative approaches for handling noisy training data at the classifier level by training concept detectors with two SVM variants: the Fuzzy SVM and the Power SVM. Experimental results on images collected from a professional image search engine indicate that 1) Fuzzy SVM outperforms both SVM and Power SVM and is the most effective approach towards handling label noise and 2) the performance gain of Fuzzy SVM compared to SVM increases progressively with the noise level in the training sets.

@inproceedings{Sarafis2014Weighted,
author={Ioannis Sarafis and Christos Diou and Theodora Tsikrika and Anastasios Delopoulos},
title={Weighted SVM from clickthrough data for image retrieval},
booktitle={2014 IEEE International Conference on Image Processing (ICIP)},
pages={3013-3017},
year={2014},
month={08},
date={2014-08-01},
url={http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=7025609},
doi={http://10.1109/ICIP.2014.7025609},
abstract={In this paper we propose a novel approach to training noise-resilient concept detectors from clickthrough data collected by image search engines. We take advantage of the query logs to automatically produce concept detector training sets; these suffer though from label noise, i.e., erroneously assigned labels. We explore two alternative approaches for handling noisy training data at the classifier level by training concept detectors with two SVM variants: the Fuzzy SVM and the Power SVM. Experimental results on images collected from a professional image search engine indicate that 1) Fuzzy SVM outperforms both SVM and Power SVM and is the most effective approach towards handling label noise and 2) the performance gain of Fuzzy SVM compared to SVM increases progressively with the noise level in the training sets.}
}
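
The common mechanism behind the SVM variants evaluated above is that each training sample carries its own weight, so uncertain clickthrough-derived labels influence the margin less. A minimal sketch using scikit-learn's per-sample weights is shown below; the synthetic data and weights are invented, and the paper's Fuzzy SVM and Power SVM formulations are not reproduced exactly.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical clickthrough-derived training set: features, noisy labels and
# per-sample confidence weights (e.g. scores from an IR model on the log).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
confidence = rng.uniform(0.1, 1.0, size=200)

# Weighted SVM: low-confidence samples contribute less to the decision
# boundary, which is the general idea behind Fuzzy SVM noise handling.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=confidence)
print("training accuracy:", round(clf.score(X, y), 2))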

(C)
Theodora Tsikrika and Christos Diou
"Multi-evidence User Group Discovery in Professional Image Search"
Advances in Information Retrieval: 36th European Conference on IR Research, ECIR 2014, Amsterdam, The Netherlands, April 13-16, 2014., pp. 693-699, Springer International Publishing, Cham, 2014 Apr
[Abstract][BibTex][pdf]

This work evaluates the combination of multiple evidence for discovering groups of users with similar interests. User groups are created by analysing the search logs recorded for a sample of 149 users of a professional image search engine in conjunction with the textual and visual features of the clicked images, and evaluated by exploiting their topical classification. The results indicate that the discovered user groups are meaningful and that combining textual and visual features improves the homogeneity of the user groups compared to each individual feature.

@inproceedings{Tsikrika2014Multi,
author={Theodora Tsikrika and Christos Diou},
title={Multi-evidence User Group Discovery in Professional Image Search},
booktitle={Advances in Information Retrieval: 36th European Conference on IR Research, ECIR 2014, Amsterdam, The Netherlands, April 13-16, 2014.},
pages={693-699},
publisher={Springer International Publishing},
address={Cham},
year={2014},
month={04},
date={2014-04-13},
url={http://dx.doi.org/10.1007/978-3-319-06028-6_78},
doi={http://10.1007/978-3-319-06028-6_78},
abstract={This work evaluates the combination of multiple evidence for discovering groups of users with similar interests. User groups are created by analysing the search logs recorded for a sample of 149 users of a professional image search engine in conjunction with the textual and visual features of the clicked images, and evaluated by exploiting their topical classification. The results indicate that the discovered user groups are meaningful and that combining textual and visual features improves the homogeneity of the user groups compared to each individual feature.}
}
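
A minimal sketch of the multi-evidence grouping idea: build one textual and one visual profile per user from their clicked images, normalise and concatenate the two modalities, and cluster the users. The profile dimensions, equal weighting of modalities and use of k-means are assumptions for illustration; the paper's combination and evaluation are more involved.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(2)

# Hypothetical per-user profiles derived from clicked images: a textual
# vector (aggregated term weights) and a visual vector (mean descriptor).
n_users = 149
text_profiles = rng.random((n_users, 50))
visual_profiles = rng.random((n_users, 64))

# Multi-evidence combination: L2-normalise each modality, then concatenate.
combined = np.hstack([normalize(text_profiles), normalize(visual_profiles)])

groups = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(combined)
print("users per group:", np.bincount(groups))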

2013

(J)
Christos Maramis, Manolis Falelakis, Irini Lekka, Christos Diou, Pericles Mitkas and Anastasios Delopoulos
"Applying semantic technologies in cervical cancer research"
Data & Knowledge Engineering, 86, pp. 160-178, 2013 Jul
[Abstract][BibTex][pdf]

In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case–control association studies, which is the ultimate goal of the system.

@article{Maramis2013Applying,
author={Christos Maramis and Manolis Falelakis and Irini Lekka and Christos Diou and Pericles Mitkas and Anastasios Delopoulos},
title={Applying semantic technologies in cervical cancer research},
journal={Data & Knowledge Engineering},
volume={86},
pages={160-178},
year={2013},
month={07},
date={2013-07-01},
url={http://www.sciencedirect.com/science/article/pii/S0169023X13000220},
doi={http://10.1016/j.datak.2013.02.003},
abstract={In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case–control association studies, which is the ultimate goal of the system.}
}

2013

(C)
Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles A. Mitkas
"Agent-Based Small-Scale Energy Consumer Models for Energy Portfolio Management"
International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2013 IEEE/WIC/ACM, pp. 94-101, IEEE, 2013 Nov
[Abstract][BibTex][pdf]

In contemporary power systems, residential consumers may account for up to 50% of a country's total electrical energy consumption. Even though they constitute a significant portion of the energy market, not much has been achieved towards eliminating the inability for energy suppliers to perform long-term portfolio management, thus maximizing their revenue. The root cause of these problems is the difficulty in modeling consumers' behavior, based on their everyday activities and personal comfort. If one were able to provide targeted incentives based on consumer profiles, the expected impact and market benefits would be significant. This paper introduces a formal residential consumer modeling methodology, that allows (i) the decomposition of the observed electrical load curves into consumer activities and, (ii) the evaluation of the impact of behavioral changes on the household's aggregate load curve. Analyzing electrical consumption measurements from DEHEMS research project enabled the model extraction of real-life consumers. Experiments indicate that the proposed methodology produces accurate small-scale consumer models and verify that small shifts in appliance usage times are sufficient to achieve significant peak power reduction.

@inproceedings{Chrysopoulos2013Agent,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Agent-Based Small-Scale Energy Consumer Models for Energy Portfolio Management},
booktitle={International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2013 IEEE/WIC/ACM},
pages={94-101},
publisher={IEEE},
year={2013},
month={11},
date={2013-11-17},
url={http://dx.doi.org/10.1109/WI-IAT.2013.96},
doi={http://10.1109/WI-IAT.2013.96},
abstract={In contemporary power systems, residential consumers may account for up to 50% of a country\\'s total electrical energy consumption. Even though they constitute a significant portion of the energy market, not much has been achieved towards eliminating the inability for energy suppliers to perform long-term portfolio management, thus maximizing their revenue. The root cause of these problems is the difficulty in modeling consumers\\' behavior, based on their everyday activities and personal comfort. If one were able to provide targeted incentives based on consumer profiles, the expected impact and market benefits would be significant. This paper introduces a formal residential consumer modeling methodology, that allows (i) the decomposition of the observed electrical load curves into consumer activities and, (ii) the evaluation of the impact of behavioral changes on the household\\'s aggregate load curve. Analyzing electrical consumption measurements from DEHEMS research project enabled the model extraction of real-life consumers. Experiments indicate that the proposed methodology produces accurate small-scale consumer models and verify that small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}
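
One simple way to picture the decomposition of an observed load curve into consumer activities (only a stand-in for the agent-based models described above) is non-negative least squares against known activity signatures: the estimated intensities indicate how strongly each activity is present in the aggregate curve. The signatures, mixing weights and noise level below are invented.

import numpy as np
from scipy.optimize import nnls

hours = np.arange(24)
# Hypothetical 24-hour activity signatures in kW (columns: cooking, laundry,
# always-on baseline).
signatures = np.stack([
    np.where((hours >= 18) & (hours < 21), 2.0, 0.0),
    np.where((hours >= 20) & (hours < 23), 1.5, 0.0),
    np.full(24, 0.3),
], axis=1)

# Observed household curve: a weighted mix of the activities plus noise.
rng = np.random.default_rng(3)
observed = signatures @ np.array([1.0, 0.8, 1.2]) + 0.05 * rng.random(24)

# Decompose the aggregate curve into non-negative activity intensities.
intensities, _ = nnls(signatures, observed)
print("estimated activity intensities:", np.round(intensities, 2))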

2012

(C)
Georgios T. Andreou, Andreas L. Symeonidis, Christos Diou, Pericles A. Mitkas and Dimitrios P. Labridis
"A framework for the implementation of large scale Demand Response"
2012 International Conference on Smart Grid Technology, Economics and Policies (SG-TEP), pp. 1-4, IEEE, 2012 Dec
[Abstract][BibTex][pdf]

The rationalization of electrical energy consumption is a constant goal driving research over the last decades. The pursuit of efficient solutions requires the involvement of electrical energy consumers through Demand Response programs. In this study, a framework is presented that can serve as a tool for designing and simulating Demand Response programs, aiming at energy efficiency through consumer behavioral change. It provides the capability to dynamically model groups of electrical energy consumers with respect to their consumption, as well as their behavior. This framework is currently under development within the scope of the EU funded FP7 project “CASSANDRA - A multivariate platform for assessing the impact of strategic decisions in electrical power systems”.

@inproceedings{Andreou2012Framework,
author={Georgios T. Andreou and Andreas L. Symeonidis and Christos Diou and Pericles A. Mitkas and Dimitrios P. Labridis},
title={A framework for the implementation of large scale Demand Response},
booktitle={2012 International Conference on Smart Grid Technology, Economics and Policies (SG-TEP)},
pages={1-4},
publisher={IEEE},
year={2012},
month={12},
date={2012-12-03},
url={http://dx.doi.org/10.1109/SG-TEP.2012.6642380},
doi={http://10.1109/SG-TEP.2012.6642380},
abstract={The rationalization of electrical energy consumption is a constant goal driving research over the last decades. The pursuit of efficient solutions requires the involvement of electrical energy consumers through Demand Response programs. In this study, a framework is presented that can serve as a tool for designing and simulating Demand Response programs, aiming at energy efficiency through consumer behavioral change. It provides the capability to dynamically model groups of electrical energy consumers with respect to their consumption, as well as their behavior. This framework is currently under development within the scope of the EU funded FP7 project “CASSANDRA - A multivariate platform for assessing the impact of strategic decisions in electrical power systems”.}
}

2010

(J)
Christos Diou, George Stephanopoulos, Panagiotis Panagiotopoulos, Christos Papachristou, Nikos Dimitriou and Anastasios Delopoulos
"Large-Scale Concept Detection in Multimedia Data Using Small Training Sets and Cross-Domain Concept Fusion"
IEEE Transactions on Circuits and Systems for Video Technology, 20, (12), pp. 1808-1821, 2010 Oct
[Abstract][BibTex][pdf]

This paper presents the concept detector module developed for the VITALAS multimedia retrieval system. It outlines its architecture and major implementation aspects, including a set of procedures and tools that were used for the development of detectors for more than 500 concepts. The focus is on aspects that increase the system's scalability in terms of the number of concepts: collaborative concept definition and disambiguation, selection of small but sufficient training sets and efficient manual annotation. The proposed architecture uses cross-domain concept fusion to improve effectiveness and reduce the number of samples required for concept detector training. Two criteria are proposed for selecting the best predictors to use for fusion and their effectiveness is experimentally evaluated for 221 concepts on the TRECVID-2005 development set and 132 concepts on a set of images provided by the Belga news agency. In these experiments, cross-domain concept fusion performed better than early fusion for most concepts. Experiments with variable training set sizes also indicate that cross-domain concept fusion is more effective than early fusion when the training set size is small.

@article{Diou2011Large,
author={Christos Diou and George Stephanopoulos and Panagiotis Panagiotopoulos and Christos Papachristou and Nikos Dimitriou and Anastasios Delopoulos},
title={Large-Scale Concept Detection in Multimedia Data Using Small Training Sets and Cross-Domain Concept Fusion},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
volume={20},
number={12},
pages={1808 - 1821},
year={2010},
month={10},
date={2010-10-18},
url={http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5604666},
doi={http://10.1109/TCSVT.2010.2087814},
abstract={This paper presents the concept detector module developed for the VITALAS multimedia retrieval system. It outlines its architecture and major implementation aspects, including a set of procedures and tools that were used for the development of detectors for more than 500 concepts. The focus is on aspects that increase the system\\'s scalability in terms of the number of concepts: collaborative concept definition and disambiguation, selection of small but sufficient training sets and efficient manual annotation. The proposed architecture uses cross-domain concept fusion to improve effectiveness and reduce the number of samples required for concept detector training. Two criteria are proposed for selecting the best predictors to use for fusion and their effectiveness is experimentally evaluated for 221 concepts on the TRECVID-2005 development set and 132 concepts on a set of images provided by the Belga news agency. In these experiments, cross-domain concept fusion performed better than early fusion for most concepts. Experiments with variable training set sizes also indicate that cross-domain concept fusion is more effective than early fusion when the training set size is small.}
}
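
The cross-domain concept fusion idea can be sketched as follows: detectors trained on a source domain score the target-domain samples, and those concept scores (a much lower-dimensional representation than the raw features) become the input of the target concept's classifier. The data, the use of logistic regression instead of the paper's detectors, and the omission of the paper's predictor-selection criteria are all simplifying assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Hypothetical low-level features: a well-annotated source domain and a
# target domain with few labelled examples for a new concept.
Xs, Xt = rng.normal(size=(1000, 30)), rng.normal(size=(80, 30))
source_labels = (Xs[:, :5] > 0.5).astype(int)      # 5 source concepts
target_y = (Xt[:, 0] + Xt[:, 1] > 0).astype(int)   # the new target concept

# Train one detector per source concept, then score the target samples.
detectors = [LogisticRegression(max_iter=1000).fit(Xs, source_labels[:, c])
             for c in range(source_labels.shape[1])]
concept_scores = np.column_stack([d.predict_proba(Xt)[:, 1] for d in detectors])

# Cross-domain fusion: train the target detector on the concept scores.
fused = LogisticRegression(max_iter=1000).fit(concept_scores, target_y)
print("fused detector training accuracy:", round(fused.score(concept_scores, target_y), 2))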

(J)
Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos
"Reliability and effectiveness of clickthrough data for automatic image annotation"
Multimedia Tools and Applications, 55, (1), pp. 27-52, 2010 Aug
[Abstract][BibTex][pdf]

Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive despite their inherent noise; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. An extensive presentation of the experimental results and the accompanying data can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.

@article{Tsikrika2011Reliability,
author={Theodora Tsikrika and Christos Diou and Arjen P. de Vries and Anastasios Delopoulos},
title={Reliability and effectiveness of clickthrough data for automatic image annotation},
journal={Multimedia Tools and Applications},
volume={55},
number={1},
pages={27-52},
year={2010},
month={08},
date={2010-08-17},
url={http://dx.doi.org/10.1007/s11042-010-0584-1},
doi={http://10.1007/s11042-010-0584-1},
abstract={Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive despite their inherent noise; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. An extensive presentation of the experimental results and the accompanying data can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.}
}
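
The core of the automatic training-data generation described above is simple to picture: an image clicked for queries that match a concept's terms becomes a (noisy) positive example for that concept. The click log, term lexicon and click threshold below are hypothetical; the paper's procedure and its noise analysis are more thorough.

# Hypothetical click log entries: (query, clicked image id, click count).
click_log = [
    ("sunset beach", "img_10", 7),
    ("beach holiday", "img_11", 2),
    ("city skyline night", "img_12", 5),
    ("sandy beach", "img_10", 3),
]

# Concept lexicon: query terms that count as evidence for the concept.
concept_terms = {"beach": {"beach", "seaside", "sand", "sandy"}}

def auto_positives(concept, log, min_clicks=3):
    """Collect noisy positive training images for a concept from the click
    log; min_clicks is a hypothetical noise-reduction threshold."""
    terms = concept_terms[concept]
    votes = {}
    for query, image_id, clicks in log:
        if terms & set(query.lower().split()):
            votes[image_id] = votes.get(image_id, 0) + clicks
    return {img for img, total in votes.items() if total >= min_clicks}

print(auto_positives("beach", click_log))  # {'img_10'}; img_11 has too few clicks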

2010

(C)
Christos Diou, George Stephanopoulos and Anastasios Delopoulos
"The Multimedia Understanding Group at TRECVID-2010"
Proceedings of the TRECVID 2010 Workshop, 2010 Jan
[Abstract][BibTex][pdf]

This is a report of the Multimedia Understanding Group participation in TRECVID-2010, where we submitted full runs for the Semantic Indexing (SIN) task. Our submission aims at experimentally evaluating three research items, that are important for work that is currently in progress. First, we examine the use of bag-of-words audio features for video concept detection, with noisy and/or low-quality video data. Although audio is important for some concepts and has shown promising results at other datasets, the results indicate that it can also lead to a decrease in performance when the quality is low and the negative examples are not adequately represented. We also explore the possibility of using a cross-domain concept fusion approach for reducing the number of dimensions at the final classifier. The corresponding experiments show, however, that when drastically reducing the number of dimensions the effectiveness drops. Finally, we also examined a transformation of the feature space, using a set of functions that are parametrically constructed from the data.

@inproceedings{Diou2010Multimedia,
author={Christos Diou and George Stephanopoulos and Anastasios Delopoulos},
title={The Multimedia Understanding Group at TRECVID-2010},
booktitle={Proceedings of the TRECVID 2010 Workshop},
year={2010},
month={01},
date={2010-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/mug-auth.pdf},
abstract={This is a report of the Multimedia Understanding Group participation in TRECVID-2010, where we submitted full runs for the Semantic Indexing (SIN) task. Our submission aims at experimentally evaluating three research items, that are important for work that is currently in progress. First, we examine the use of bag-of-words audio features for video concept detection, with noisy and/or low-quality video data. Although audio is important for some concepts and has shown promising results at other datasets, the results indicate that it can also lead to a decrease in performance when the quality is low and the negative examples are not adequately represented. We also explore the possibility of using a cross-domain concept fusion approach for reducing the number of dimensions at the final classifier. The corresponding experiments show, however, that when drastically reducing the number of dimensions the effectiveness drops. Finally, we also examined a transformation of the feature space, using a set of functions that are parametrically constructed from the data.}
}

2009

(C)
Christos Diou, George Stephanopoulos, Nikos Dimitriou, Panagiotis Panagiotopoulos, Christos Papachristou, Anastasios Delopoulos, Henning Rode, Theodora Tsikrika, Arjen P. de Vries, Daniel Schneider, Jochen Schwenninger, Marie-Luce Viaud, Agnès Saulnier, Peter Altendorf, Birgit Schröter, Matthias Elser, Angel Rego, Alex Rodriguez, Cristina Martínez, Iñaki Etxaniz, Gérard Dupont, Bruno Grilhères, Nicolas Martin, Nozha Boujemaa, Alexis Joly, Raffi Enficiaud and Anne Verroust
"VITALAS at TRECVID 2009"
2009 TREC Video Retrieval Evaluation Workshop TRECVID-2009, 2009 Jan
[Abstract][BibTex][pdf]

This paper describes the participation of VITALAS in the TRECVID-2009 evaluation where we submitted runs for the High-Level Feature Extraction (HLFE) and Interactive Search tasks. For the HLFE task, we focus on the evaluation of low-level feature sets and fusion methods. The runs employ multiple low-level features based on all available modalities (visual, audio and text) and the results show that use of such features improves the retrieval effectiveness significantly. We also use a concept score fusion approach that achieves good results with reduced low-level feature vector dimensionality. Furthermore, a weighting scheme is introduced for cluster assignment in the “bag-of-words” approach. Our runs achieved good performance compared to a baseline run and the submissions of other TRECVID-2009 participants. For the Interactive Search task, we focus on the evaluation of the integrated VITALAS system in order to gain insights into the use and effectiveness of the system’s search functionalities on (the combination of) multiple modalities and study the behavior of two user groups: professional archivists and non-professional users. Our analysis indicates that both user groups submit about the same total number of queries and use the search functionalities in a similar way, but professional users save twice as many shots and examine shots deeper in the ranked retrieved list. The agreement between the TRECVID assessors and our users was quite low. In terms of the effectiveness of the different search modalities, similarity searches retrieve on average twice as many relevant shots as keyword searches, fused searches three times as many, while concept searches retrieve even up to five times as many relevant shots, indicating the benefits of the use of robust concept detectors in multimodal video retrieval.

@inproceedings{Diou2009VITALAS,
author={Christos Diou and George Stephanopoulos and Nikos Dimitriou and Panagiotis Panagiotopoulos and Christos Papachristou and Anastasios Delopoulos and Henning Rode and Theodora Tsikrika and Arjen P. de Vries and Daniel Schneider and Jochen Schwenninger and Marie-Luce Viaud and Agnès Saulnier and Peter Altendorf and Birgit Schröter and Matthias Elser and Angel Rego and Alex Rodriguez and Cristina Martínez and Iñaki Etxaniz and Gérard Dupont and Bruno Grilhères and Nicolas Martin and Nozha Boujemaa and Alexis Joly and Raffi Enficiaud and Anne Verroust},
title={VITALAS at TRECVID 2009},
booktitle={2009 TREC Video Retrieval Evaluation Workshop TRECVID-2009},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/vitalas09.pdf},
abstract={This paper describes the participation of VITALAS in the TRECVID-2009 evaluation where we submitted runs for the High-Level Feature Extraction (HLFE) and Interactive Search tasks.For the HLFE task, we focus on the evaluation of low-level feature sets and fusion methods. The runs employ multiple low-level features based on all available modalities (visual,audio and text) and the results show that use of such features improves the retrieval effectiveness significantly. We also use a concept score fusion approach that achieves good results with reduced low-level feature vector dimensionality. Furthermore, a weighting scheme is introduced for cluster assignment in the “bag-of-words” approach. Our runs achieved good performance compared to a baseline run and the submissions of other TRECVID-2009 participants. For the Interactive Search task, we focus on the evaluation of the integrated VITALAS system in order to gain insights into the use and effectiveness of the system’s search functionalities on (the combination of) multiple modalities and study the behavior of two user groups: professional archivists and non-professional users. Our analysis indicates that both user groups submit about the same total number of queries and use the search functionalities in a similar way, but professional users save twice as many shots and examine shots deeper in the ranked retrieved list.The agreement between the TRECVID assessors and our users was quite low. In terms of the effectiveness of the different search modalities, similarity searches retrieve on average twice as many relevant shots as keyword searches, fused searches three times as many, while concept searches retrieve even up to five times as many relevant shots, indicating the benefits of the use of robust concept detectors in multimodal video retrieval.}
}

(C)
Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos
"Are clickthrough data reliable as image annotations?"
Proceedings of the Theseus/ImageCLEF workshop on visual information retrieval evaluation, 2009 Sep
[Abstract][BibTex][pdf]

We examine the reliability of clickthrough data as concept-based image annotations, by comparing them against manual annotations, for different concept categories. Our analysis shows that, for many concepts, the image annotations generated by using clickthrough data are reliable, with up to 90% of true positives in the automatically annotated images compared to the manual ground truth. Concept categories, though, do not provide additional evidence about the types of concepts for which clickthrough-based image annotation performs well.

@inproceedings{Tsikrika2009Clickthrough,
author={Theodora Tsikrika and Christos Diou and Arjen P. de Vries and Anastasios Delopoulos},
title={Are clickthrough data reliable as image annotations?},
booktitle={Proceedings of the Theseus/ImageCLEF workshop on visual information retrieval evaluation},
year={2009},
month={09},
date={2009-09-29},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1.1.154.9693.pdf},
abstract={We examine the reliability of clickthrough data as concept-based image annotations, by comparing them against manual annotations, for different concept categories. Our analysis shows that, for many concepts, the image annotations generated by using clickthrough data are reliable, with up to 90% of true positives in the automatically annotated images compared to the manual ground truth. Concept categories, though, do not provide additional evidence about the types of concepts for which clickthrough- based image annotation performs well.}
}

(C)
Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos
"Image Annotation Using Clickthrough Data"
Proceedings of the ACM International Conference on Image and Video Retrieval, ACM, New York, NY, USA, 2009 Jan
[Abstract][BibTex][pdf]

Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. The datasets used as well as an extensive presentation of the experimental results can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.

@inproceedings{Tsikrika2009Image,
author={Theodora Tsikrika and Christos Diou and Arjen P. de Vries and Anastasios Delopoulos},
title={Image Annotation Using Clickthrough Data},
booktitle={Proceedings of the ACM International Conference on Image and Video Retrieval},
publisher={ACM},
address={New York, NY, USA},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/a14-tsikrika.pdf},
doi={http://10.1145/1646396.1646415},
abstract={Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. The datasets used as well as an extensive presentation of the experimental results can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.}
}

2008

(J)
Manolis Falelakis, Christos Diou and Anastasios Delopoulos
"Complexity control in semantic identification"
International Journal of Intelligent Systems Technologies and Applications, 1, (3/4), pp. 247-262, 2008 Jan
[Abstract][BibTex][pdf]

This work introduces an efficient scheme for identifying semantic entities within multimedia data sets, providing mechanisms for modelling the trade-off between the accuracy of the result and the entailed computational cost. Semantic entities are described through formal definitions based on lower-level semantic and/or syntactic features. Based on appropriate metrics, the paper presents a methodology for selecting optimal subsets of syntactic features to extract, so that satisfactory results are obtained, while complexity remains below some required limit.

@article{Falelakis2006Complexity,
author={Manolis Falelakis and Christos Diou and Anastasios Delopoulos},
title={Complexity control in semantic identification},
journal={International Journal of Intelligent Systems Technologies and Applications},
volume={1},
number={3/4},
pages={247-262},
year={2008},
month={01},
date={2008-01-04},
url={http://dx.doi.org/10.1504/IJISTA.2006.009907},
doi={http://10.1504/IJISTA.2006.009907},
abstract={This work introduces an efficient scheme for identifying semantic entities within multimedia data sets, providing mechanisms for modelling the trade-off between the accuracy of the result and the entailed computational cost. Semantic entities are described through formal definitions based on lower-level semantic and/or syntactic features. Based on appropriate metrics, the paper presents a methodology for selecting optimal subsets of syntactic features to extract, so that satisfactory results are obtained, while complexity remains below some required limit.}
}
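
The accuracy/complexity trade-off described above is essentially a budgeted feature-selection problem; the greedy sketch below (not the paper's optimisation method) picks syntactic features by accuracy gain per unit of extraction cost until a complexity budget is exhausted. The feature names, gains and costs are invented.

# Hypothetical syntactic features: name -> (accuracy gain, extraction cost).
features = {
    "colour_histogram": (0.20, 1.0),
    "edge_orientation": (0.15, 2.0),
    "motion_activity":  (0.25, 5.0),
    "texture_energy":   (0.10, 0.5),
}

def select_features(candidates, budget):
    """Greedy budgeted selection: repeatedly take the feature with the best
    gain-to-cost ratio that still fits under the complexity limit."""
    chosen, cost = [], 0.0
    remaining = dict(candidates)
    while remaining:
        name, (gain, c) = max(remaining.items(),
                              key=lambda kv: kv[1][0] / kv[1][1])
        del remaining[name]
        if cost + c <= budget:
            chosen.append(name)
            cost += c
    return chosen, cost

print(select_features(features, budget=4.0))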

2008

(C)
Christos Diou, Christos Papachristou, Panagiotis Panagiotopoulos, George Stephanopoulos, Nikos Dimitriou, Anastasios Delopoulos, Henning Rode, Robin Aly, Arjen P. de Vries and Theodora Tsikrika
"VITALAS at TRECVID-2008"
6th TREC Video Retrieval Evaluation Workshop TRECVID08, Gaithersburg, USA, 2008 Jan
[Abstract][BibTex][pdf]

This is the first participation of VITALAS in TRECVID. In the high level feature extraction task, our submitted runs are based mainly on visual features, while one run utilizes audio information as well; the text is not used. The experiments performed aim at evaluating the effectiveness of different approaches to input processing prior to the final classification (i.e., ranking) stage. These are (i) clustering of feature vectors within the feature space, (ii) fusion of classifier output scores for other concepts and (iii) feature selection. The results indicate that (i) fusion of the classifier output of other concepts can provide valuable information, even if the original features are not discriminative, (ii) feature selection generally improves the results (especially when the original number of dimensions is high) and (iii) clustering within the feature space with small number of clusters does not seem to provide any significant additional information. Our experiments for the search task are focused on concept retrieval. We generate an artificial text collection by merging context descriptions according to the probability of each concept to occur in a given shot. To make the approach feasible, we further need to investigate techniques for pruning the dense shot concept matrix. Despite the poor overall retrieval quality, our concept search runs show a similar performance to the pure ASR run. Only the combination of ASR and concept search yields considerable improvements. Among the tested concept pruning strategies, the simple top k selection works better than the deviation-based thresholding.

@inproceedings{Diou2008VITALAS,
author={Christos Diou and Christos Papachristou and Panagiotis Panagiotopoulos and George Stephanopoulos and Nikos Dimitriou and Anastasios Delopoulos and Henning Rode and Robin Aly and Arjen P. de Vries and Theodora Tsikrika},
title={VITALAS at TRECVID-2008},
booktitle={6th TREC Video Retrieval Evaluation Workshop TRECVID08},
address={Gaithersburg, USA},
year={2008},
month={01},
date={2008-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/vitalas.pdf},
abstract={This is the first participation of VITALAS in TRECVID. In the high level feature extraction task, our submitted runs are based mainly on visual features, while one run utilizes audio information as well; the text is not used. The experiments performed aim at evaluating the effectiveness of different approaches to input processing prior to the final classification (i.e., ranking) stage. These are (i) clustering of feature vectors within the feature space, (ii) fusion of classifier output scores for other concepts and (iii) feature selection. The results indicate that (i) fusion of the classifier output of other concepts can provide valuable information, even if the original features are not discriminative, (ii) feature selection generally improves the results (especially when the original number of dimensions is high) and (iii) clustering within the feature space with small number of clusters does not seem to provide any significant additional information. Our experiments for the search task are focused on concept retrieval. We generate an artificial text collection by merging context descriptions according to the probability of each concept to occur in a given shot. To make the approach feasible, we further need to investigate techniques for pruning the dense shot concept matrix. Despite the poor overall retrieval quality, our concept search runs show a similar performance to the pure ASR run. Only the combination of ASR and concept search yields considerable improvements. Among the tested concept pruning strategies, the simple top k selection works better than the deviation-based thresholding.}
}

(C)
Pericles A. Mitkas, Vassilis Koutkias, Andreas L. Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
Proceedings of the International Congress of the European Federation for Medical Informatics (MIE08), Goteborg, Sweden, 2008 May
[Abstract][BibTex]

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@inproceedings{Mitkas2008Association,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas L. Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
booktitle={Proceedings of the International Congress of the European Federation for Medical Informatics (MIE08)},
address={Goteborg, Sweden},
year={2008},
month={05},
date={2008-05-25},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

2008

(I)
Christos Diou, Nikos Batalas and Anastasios Delopoulos
"Advances in Semantic Media Adaptation and Personalization"
Chapter: Indexing and Browsing of Color Images: Design Considerations, 29, pp. 329-346, Springer, Berlin, Heidelberg, 2008 Jan
[Abstract][BibTex][pdf]

This chapter deals with the various problems and decisions associated with the design of a content based image retrieval system. Image descriptors and descriptor similarity measures, indexing data structures and navigation approaches are examined through the evaluation of a set of representative methods. Insight is provided regarding their efficiency and applicability. Furthermore, the accuracy of using low dimensional FastMap point configurations for indexing is extensively evaluated through a set of experiments. While it is out of the scope of this chapter to offer a review of state of the art techniques in the problems above, the results presented aim at assisting in the design and development of practical, usable and possibly large scale image databases.

@inbook{Diou2008Indexing,
author={Christos Diou and Nikos Batalas and Anastasios Delopoulos},
title={Advances in Semantic Media Adaptation and Personalization},
chapter={Indexing and Browsing of Color Images: Design Considerations},
volume={29},
pages={329-346},
publisher={Springer},
address={Berlin, Heidelberg},
year={2008},
month={01},
date={2008-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_978-3-540-76361_16.pdf},
doi={10.1007/978-3-540-76361_16},
abstract={This chapter deals with the various problems and decisions associated with the design of a content based image retrieval system. Image descriptors and descriptor similarity measures, indexing data structures and navigation approaches are examined through the evaluation of a set of representative methods. Insight is provided regarding their efficiency and applicability. Furthermore, the accuracy of using low dimensional FastMap point configurations for indexing is extensively evaluated through a set of experiments. While it is out of the scope of this chapter to offer a review of state of the art techniques in the problems above, the results presented aim at assisting in the design and development of practical, usable and possibly large scale image databases.}
}
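
As a rough illustration of the FastMap indexing evaluated in this chapter, the sketch below implements one common formulation of the projection step: each output dimension is obtained by projecting every object onto the line through two pivot objects using only pairwise distances, after which the residual distances are updated. This is a minimal implementation written for this page under those assumptions, not the chapter's code; the pivot heuristic and function name are choices made here.

import numpy as np

def fastmap(D, k):
    """Map n objects to k dimensions using only an (n, n) distance matrix D."""
    D2 = D.astype(float) ** 2
    X = np.zeros((D.shape[0], k))
    for dim in range(k):
        o = 0
        a = int(np.argmax(D2[o]))               # pivot heuristic: farthest from an
        b = int(np.argmax(D2[a]))               # arbitrary object, then farthest from a
        dab2 = D2[a, b]
        if dab2 == 0:                           # remaining distances are exhausted
            break
        x = (D2[a] + dab2 - D2[b]) / (2.0 * np.sqrt(dab2))   # projection on the pivot line
        X[:, dim] = x
        D2 = np.maximum(D2 - (x[:, None] - x[None, :]) ** 2, 0.0)   # residual distances
    return X

In an image-database setting the input distances would come from the color descriptors, and a low-dimensional configuration (e.g. two or three dimensions) can then be used for browsing or as input to an index.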

2006

(J)
Manolis Falelakis, Christos Diou and Anastasios Delopoulos
"Semantic identification: balancing between complexity and validity"
EURASIP Journal on Applied Signal Processing, pp. 183-183, 2006 Jan
[Abstract][BibTex][pdf]

An efficient scheme for identifying semantic entities within data sets such as multimedia documents, scenes, signals, and so forth, is proposed in this work. Expression of semantic entities in terms of syntactic properties is modelled with appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, formal definitions of attained validity and certainty and also required complexity are defined as metrics of identification efficiency. The main contribution of the paper relies on organizing the identification and search procedure in a way that maximizes its validity for bounded complexity budgets and reversely minimizes computational complexity for a given required validity threshold. The associated optimization problem is solved by using dynamic programming. Finally, a set of experiments provides insight to the introduced theoretical framework.

@article{Falelakis2006Semantic,
author={Manolis Falelakis and Christos Diou and Anastasios Delopoulos},
title={Semantic identification: balancing between complexity and validity},
journal={EURASIP Journal on Applied Signal Processing},
pages={183-183},
year={2006},
month={01},
date={2006-01-01},
url={http://dx.doi.org/10.1155/ASP/2006/41716},
doi={10.1155/ASP/2006/41716},
abstract={An efficient scheme for identifying semantic entities within data sets such as multimedia documents, scenes, signals, and so forth, is proposed in this work. Expression of semantic entities in terms of syntactic properties is modelled with appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, formal definitions of attained validity and certainty and also required complexity are defined as metrics of identification efficiency. The main contribution of the paper relies on organizing the identification and search procedure in a way that maximizes its validity for bounded complexity budgets and reversely minimizes computational complexity for a given required validity threshold. The associated optimization problem is solved by using dynamic programming. Finally, a set of experiments provides insight to the introduced theoretical framework.}
}

2006

(C)
Nikos Batalas, Christos Diou and Anastasios Delopoulos
"Efficient Indexing, Color Descriptors and Browsing in Image Databases"
1st International Workshop on Semantic Media Adaptation and Personalization (SMAP06), pp. 129-134, Athens, Greece, 2006 Jan
[Abstract][BibTex][pdf]

This work provides an experimental evaluation of various existing approaches for some of the major problems content based image retrieval applications are faced with. More specifically, global color representation, indexing and navigation methods are analyzed and insight is provided regarding their efficiency and applicability. Furthermore this paper proposes and evaluates the combined use of FastMap and kd-trees to enable accurate and fast retrieval in image databases.

@inproceedings{Batalas2006Efficient,
author={Nikos Batalas and Christos Diou and Anastasios Delopoulos},
title={Efficient Indexing, Color Descriptors and Browsing in Image Databases},
booktitle={1st International Workshop on Semantic Media Adaptation and Personalization (SMAP06)},
pages={129-134},
address={Athens, Greece},
year={2006},
month={01},
date={2006-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/smap06_01.pdf},
abstract={This work provides an experimental evaluation of various existing approaches for some of the major problems content based image retrieval applications are faced with. More specifically, global color representation, indexing and navigation methods are analyzed and insight is provided regarding their efficiency and applicability. Furthermore this paper proposes and evaluates the combined use of FastMap and kd-trees to enable accurate and fast retrieval in image databases.}
}
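
The combined use of FastMap and kd-trees proposed in this paper can be illustrated with a small retrieval sketch. The color descriptors and the FastMap step are assumed to be precomputed (random points stand in for them here); the sketch shows only the indexing and query side, using SciPy's k-d tree.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
coords = rng.random((10000, 4))        # stand-in for 4-D FastMap coordinates of the images
tree = cKDTree(coords)                 # build the index once, offline

query = rng.random(4)                  # FastMap coordinates of the query image
dist, idx = tree.query(query, k=10)    # ten nearest neighbours in the projected space
# idx holds candidate images, which can be re-ranked with the full color descriptors.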

(C)
Christos Diou, Giorgos Katsikatsos and Anastasios Delopoulos
"Constructing Fuzzy Relations from WordNet for Word Sense Disambiguation"
Semantic Media Adaptation and Personalization, pp. 135-140, Athens, Greece, 2006 Dec
[Abstract][BibTex][pdf]

In this work, the problem of word sense disambiguation is formulated as a problem of imprecise associations between words and word senses in a textual context. The approach has two main parts. Initially, we consider that for each sense, a fuzzy set is given that provides the degrees of association between a number of words and the sense. An algorithm is provided that ranks the senses of a word in a text based on this information, effectively leading to word sense disambiguation. In the second part, a method based on WordNet is developed that constructs the fuzzy sets for the senses (independent of any text). Algorithms are provided that can help in both understanding and implementation of the proposed approach. Experimental results are satisfactory and show that modeling word sense disambiguation as a problem of imprecise associations is promising.

@inproceedings{Diou2006Constructing,
author={Christos Diou and Giorgos Katsikatsos and Anastasios Delopoulos},
title={Constructing Fuzzy Relations from WordNet for Word Sense Disambiguation},
booktitle={Semantic Media Adaptation and Personalization},
pages={135-140},
address={Athens, Greece},
year={2006},
month={12},
date={2006-12-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/04041972.pdf},
doi={10.1109/SMAP.2006.14},
abstract={In this work, the problem of word sense disambiguation is formulated as a problem of imprecise associations between words and word senses in a textual context. The approach has two main parts. Initially, we consider that for each sense, a fuzzy set is given that provides the degrees of association between a number of words and the sense. An algorithm is provided that ranks the senses of a word in a text based on this information, effectively leading to word sense disambiguation. In the second part, a method based on WordNet is developed that constructs the fuzzy sets for the senses (independent of any text). Algorithms are provided that can help in both understanding and implementation of the proposed approach. Experimental results are satisfactory and show that modeling word sense disambiguation as a problem of imprecise associations is promising.}
}
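
A toy sketch of the general idea (not the construction used in the paper) is given below: a crude fuzzy set of associated words is built for each WordNet sense from its synonyms, hypernyms and gloss, and the senses of an ambiguous word are then ranked by their total association with the surrounding context words. The membership values (0.9, 0.6, 0.3) are arbitrary assumptions for the example, and NLTK's WordNet interface is used in place of a custom one.

from collections import defaultdict
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def sense_fuzzy_set(synset):
    """Crude fuzzy set: word -> degree of association with this sense."""
    members = defaultdict(float)
    for w in synset.lemma_names():                       # synonyms: strong association
        members[w.lower()] = max(members[w.lower()], 0.9)
    for hyper in synset.hypernyms():                     # hypernym lemmas: medium
        for w in hyper.lemma_names():
            members[w.lower()] = max(members[w.lower()], 0.6)
    for w in synset.definition().split():                # gloss words: weak association
        members[w.lower()] = max(members[w.lower()], 0.3)
    return members

def rank_senses(word, context_words):
    """Rank the WordNet senses of `word` by total association with the context."""
    scores = []
    for synset in wn.synsets(word):
        fuzzy = sense_fuzzy_set(synset)
        score = sum(fuzzy.get(w.lower(), 0.0) for w in context_words)
        scores.append((score, synset.name()))
    return sorted(scores, reverse=True)

print(rank_senses("bank", ["money", "deposit", "loan"]))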

(C)
Christos Diou, Anastasia Manta and Anastasios Delopoulos
"Space-time tubes and motion representation"
Proceedings of the 3rd IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI), Athens, Greece, 2006 Jan
[Abstract][BibTex][pdf]

Space-time tubes, a feature that can be used for analysis of motion based on the observed moving points in a scene is introduced. Information provided by sensors is used to detect moving points and based on their connectivity, tubes enable a structured approach towards identifying moving objects and high level events. It is shown that using tubes in conjunction with domain knowledge can overcome errors caused by the inaccuracy or inadequacy of the original motion information. The detected high level events can then be mapped to small natural language descriptions of object motion in the scene.

@inproceedings{Diou2006Space,
author={Christos Diou and Anastasia Manta and Anastasios Delopoulos},
title={Space-time tubes and motion representation},
booktitle={Proceedings of the 3rd IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI)},
address={Athens, Greece},
year={2006},
month={01},
date={2006-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_0-387-34224-9_68.pdf},
abstract={Space-time tubes, a feature that can be used for analysis of motion based on the observed moving points in a scene is introduced. Information provided by sensors is used to detect moving points and based on their connectivity, tubes enable a structured approach towards identifying moving objects and high level events. It is shown that using tubes in conjunction with domain knowledge can overcome errors caused by the inaccuracy or inadequacy of the original motion information. The detected high level events can then be mapped to small natural language descriptions of object motion in the scene.}
}
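
A very rough way to convey what a space-time tube is (an assumption-laden toy, not the paper's construction) is to stack the per-frame motion masks into a binary (t, y, x) volume and take 3-D connected components: moving points that stay connected across consecutive frames then fall into the same labelled tube.

import numpy as np
from scipy import ndimage

volume = np.zeros((30, 64, 64), dtype=bool)   # 30 frames of a 64x64 moving-point mask
for t in range(30):                           # a synthetic object drifting to the right
    volume[t, 30:34, 10 + t:14 + t] = True

labels, n_tubes = ndimage.label(volume)       # default 6-connectivity in 3-D
sizes = ndimage.sum(volume, labels, index=range(1, n_tubes + 1))   # voxels per tube
print(n_tubes, sizes)                         # one tube for the single synthetic object

Each labelled component can then be summarised (duration, per-frame centroid trajectory) and checked against domain knowledge, which corresponds roughly to the higher-level reasoning the paper describes.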

2005

(C)
Christos Diou, Manolis Falelakis and Anastasios Delopoulos
"Knowledge Based Unification of Medical Archives"
International Networking Conference (INC2005), Samos, Greece, 2005 Jan
[Abstract][BibTex]

@inproceedings{Diou2005Knowledge,
author={Christos Diou and Manolis Falelakis and Anastasios Delopoulos},
title={Knowledge Based Unification of Medical Archives},
booktitle={International Networking Conference (INC2005)},
address={Samos, Greece},
year={2005},
month={01},
date={2005-01-01}
}

(C)
Manolis Falelakis, Christos Diou, Anastasios Valsamidis and Anastasios Delopoulos
"Complexity Control in Semantic Identification"
IEEE International Conference on Fuzzy Systems (FUZZ-IEEE05), pp. 102-107, Reno, Nevada, USA, 2005 Jan
[Abstract][BibTex][pdf]

This paper proposes a methodology for modeling the process of semantic identification and controlling its complexity and accuracy of the results. Each semantic entity is defined in terms of lower level semantic entities and low level features that can be automatically extracted, while different membership degrees are assigned to each one of the entities participating in a definition, depending on their importance for the identification. By selecting only a subset of the features that are used to define a semantic entity both complexity and accuracy of the results are reduced. It is possible, however, to design the identification using the metrics introduced, so that satisfactory results are obtained, while complexity remains below some required limit.

@inproceedings{Falelakis2005Complexity,
author={Manolis Falelakis and Christos Diou and Anastasios Valsamidis and Anastasios Delopoulos},
title={Complexity Control in Semantic Identification},
booktitle={IEEE International Conference on Fuzzy Systems (FUZZ-IEEE05)},
pages={102-107},
address={Reno, Nevada, USA},
year={2005},
month={01},
date={2005-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/Falelakis-Diou-Valsamidis-Delopoulos_FUZZIEEE05_paper1.pdf},
doi={10.1504/IJISTA.2006.009907},
abstract={This paper proposes a methodology for modeling the process of semantic identification and controlling its complexity and accuracy of the results. Each semantic entity is defined in terms of lower level semantic entities and low level features that can be automatically extracted, while different membership degrees are assigned to each one of the entities participating in a definition, depending on their importance for the identification. By selecting only a subset of the features that are used to define a semantic entity both complexity and accuracy of the results are reduced. It is possible, however, to design the identification using the metrics introduced, so that satisfactory results are obtained, while complexity remains below some required limit.}
}

(C)
Manolis Falelakis, Christos Diou, Anastasios Valsamidis and Anastasios Delopoulos
"Dynamic Semantic Identification with Complexity Constraints as a Knapsack Problem"
The 14th IEEE International Conference on Fuzzy Systems, 2005. FUZZ '05., IEEE, 2005 May
[Abstract][BibTex][pdf]

The process of automatic identification of high level semantic entities (e.g., objects, concepts or events) in multimedia documents requires processing by means of algorithms that are used for feature extraction, i.e. low level information needed for the analysis of these documents at a semantic level. This work copes with the high and often prohibitive computational complexity of this procedure. Emphasis is given to a dynamic scheme that allows for efficient distribution of the available computational resources in application scenarios that deal with the identification of multiple high level entities with strict simultaneous restrictions, such as real time applications.

@inproceedings{Falelakis2005Dynamic,
author={Manolis Falelakis and Christos Diou and Anastasios Valsamidis and Anastasios Delopoulos},
title={Dynamic Semantic Identification with Complexity Constraints as a Knapsack Problem},
booktitle={The 14th IEEE International Conference on Fuzzy Systems, 2005. FUZZ '05.},
publisher={IEEE},
year={2005},
month={05},
date={2005-05-25},
url={http://mug.ee.auth.gr/wp-content/uploads/Falelakis-Diou-Valsamidis-Delopoulos_FUZZIEEE05_paper2.pdf},
doi={10.1109/FUZZY.2005.1452456},
abstract={The process of automatic identification of high level semantic entities (e.g., objects, concepts or events) in multimedia documents requires processing by means of algorithms that are used for feature extraction, i.e. low level information needed for the analysis of these documents at a semantic level. This work copes with the high and often prohibitive computational complexity of this procedure. Emphasis is given to a dynamic scheme that allows for efficient distribution of the available computational resources in application scenarios that deal with the identification of multiple high level entities with strict simultaneous restrictions, such as real time applications.}
}
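
Since the paper casts the allocation of a fixed computational budget as a knapsack problem, a minimal 0/1 knapsack dynamic program is sketched below. The costs, gains and budget are invented numbers standing in for estimated feature-extraction times and expected contributions to the identification; the formulation in the paper itself is richer than this.

def knapsack(costs, gains, budget):
    """Pick the subset of items maximizing total gain within an integer cost budget."""
    best = [0.0] * (budget + 1)                 # best[b] = maximal gain using budget b
    chosen = [set() for _ in range(budget + 1)]
    for i, (c, g) in enumerate(zip(costs, gains)):
        for b in range(budget, c - 1, -1):      # iterate downwards for 0/1 semantics
            if best[b - c] + g > best[b]:
                best[b] = best[b - c] + g
                chosen[b] = chosen[b - c] | {i}
    return best[budget], chosen[budget]

costs = [3, 5, 2, 7]                            # e.g. estimated extraction cost per entity
gains = [0.4, 0.6, 0.2, 0.9]                    # e.g. expected contribution to identification
print(knapsack(costs, gains, budget=10))        # best achievable gain and the chosen indices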

(C)
Manolis Falelakis, Christos Diou, Manolis Wallace and Anastasios Delopoulos
"Minimizing Uncertainty In Semantic Identification When Computing Resources Are Limited"
International Conference on Artificial Neural Networks (ICANN05), pp. 817-822, Springer Berlin Heidelberg, Warsaw, Poland, 2005 Jan
[Abstract][BibTex][pdf]

In this paper we examine the problem of automatic semantic identification of entities in multimedia documents from a computing point of view. Specifically, we identify as main points to consider the storage of the required knowledge and the computational complexity of the handling of the knowledge as well as of the actual identification process. In order to tackle the above we utilize (i) a sparse representation model for storage, (ii) a novel transitive closure algorithm for handling and (iii) a novel approach to identification that allows for the specification of computational boundaries.

@inproceedings{Falelakis2005Minimizing,
author={Manolis Falelakis and Christos Diou and Manolis Wallace and Anastasios Delopoulos},
title={Minimizing Uncertainty In Semantic Identification When Computing Resources Are Limited},
booktitle={International Conference on Artificial Neural Networks (ICANN05)},
pages={817-822},
publisher={Springer Berlin Heidelberg},
address={Warsaw, Poland},
year={2005},
month={01},
date={2005-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_11550907_129.pdf},
doi={10.1007/11550907_129},
abstract={In this paper we examine the problem of automatic semantic identification of entities in multimedia documents from a computing point of view. Specifically, we identify as main points to consider the storage of the required knowledge and the computational complexity of the handling of the knowledge as well as of the actual identification process. In order to tackle the above we utilize (i) a sparse representation model for storage, (ii) a novel transitive closure algorithm for handling and (iii) a novel approach to identification that allows for the specification of computational boundaries.}
}
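
To give a concrete (and much simplified) picture of the transitive-closure step mentioned above, the sketch below computes the max-min transitive closure of a small fuzzy relation by repeating max-min composition until a fixed point. A dense NumPy array is used purely for brevity; the paper's contribution concerns a sparse representation and a dedicated closure algorithm, neither of which is reproduced here.

import numpy as np

def maxmin_compose(R, S):
    # (R o S)[i, j] = max_k min(R[i, k], S[k, j])
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def transitive_closure(R):
    closure = R.copy()
    while True:
        nxt = np.maximum(closure, maxmin_compose(closure, R))
        if np.array_equal(nxt, closure):
            return nxt
        closure = nxt

R = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0]])
print(transitive_closure(R))    # the (0, 2) entry becomes min(0.8, 0.6) = 0.6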

2004

(C)
Manolis Falelakis, Christos Diou and Anastasios Delopoulos
"Identification of Semantics: Balancing between Complexity and Validity"
2004 IEEE 6th Workshop on Multimedia Signal Processing, pp. 434-437, IEEE, Siena, Italy, 2004 Sep
[Abstract][BibTex][pdf]

This paper addresses the problem of identifying semantic entities (e.g., events, objects, concepts etc.) in a particular environment (e.g., a multimedia document, a scene, a signal etc.) by means of an appropriately modelled semantic encyclopedia. Each semantic entity in the encyclopedia is defined in terms of other semantic entities as well as low level features, which we call syntactic entities, in a hierarchical scheme. Furthermore, a methodology is introduced, which can be used to evaluate the direct contribution of every syntactic feature of the document to the identification of semantic entities. This information allows us to estimate the quality of the result as well as the required computational cost of the search procedure and to balance between them. Our approach could be particularly important in real time and/or bulky search/indexing applications.

@inproceedings{Falelakis2004Identification,
author={Manolis Falelakis and Christos Diou and Anastasios Delopoulos},
title={Identification of Semantics: Balancing between Complexity and Validity},
booktitle={2004 IEEE 6th Workshop on Multimedia Signal Processing},
pages={434-437},
publisher={IEEE},
address={Siena, Italy},
year={2004},
month={09},
date={2004-09-29},
url={http://mug.ee.auth.gr/wp-content/uploads/Falelakis-Diou-Delopoulos_MMSP04.pdf},
doi={10.1109/MMSP.2004.1436588},
abstract={This paper addresses the problem of identifying semantic entities (e.g., events, objects, concepts etc.) in a particular environment (e.g., a multimedia document, a scene, a signal etc.) by means of an appropriately modelled semantic encyclopedia. Each semantic entity in the encyclopedia is defined in terms of other semantic entities as well as low level features, which we call syntactic entities, in a hierarchical scheme. Furthermore, a methodology is introduced, which can be used to evaluate the direct contribution of every syntactic feature of the document to the identification of semantic entities. This information allows us to estimate the quality of the result as well as the required computational cost of the search procedure and to balance between them. Our approach could be particularly important in real time and/or bulky search/indexing applications.}
}

2003

(J)
C. Diou and Jacek Karwatka
"Some methods of identification high clutter regions in radar tracking system"
Postepy Radiotechniki, 48, (147), pp. 3-15, 2003 Jan
[Abstract][BibTex]

@article{Diou2003Some,
author={C. Diou and Jacek Karwatka},
title={Some methods of identification high clutter regions in radar tracking system},
journal={Postepy Radiotechniki},
volume={48},
number={147},
pages={3-15},
year={2003},
month={01},
date={2003-01-01}
}