Publications

2021

(J)
Alexandros Papadopoulos, Fotis Topouzis and Anastasios Delopoulos
"An interpretable multiple-instance approach for the detection of referable diabetic retinopathy in fundus images"
Scientific Reports, 2021 Jul
[Abstract][BibTex][pdf]

Diabetic retinopathy (DR) is one of the leading causes of vision loss across the world. Yet despite its wide prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for monitoring their condition. This can lead to delays in the start of treatment, thereby lowering their chances for a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to retinopathy severity estimates for patients in remote regions or even for complementing the human expert’s diagnosis. Here we propose a machine learning system for the detection of referable diabetic retinopathy in fundus images, which is based on the paradigm of multiple-instance learning. Our method extracts local information independently from multiple rectangular image patches and combines it efficiently through an attention mechanism that focuses on the abnormal regions of the eye (i.e. those that contain DR-induced lesions), thus resulting in a final image representation that is suitable for classification. Furthermore, by leveraging the attention mechanism our algorithm can seamlessly produce informative heatmaps that highlight the regions where the lesions are located. We evaluate our approach on the publicly available Kaggle, Messidor-2 and IDRiD retinal image datasets, in which it exhibits near state-of-the-art classification performance (AUC of 0.961 in Kaggle and 0.976 in Messidor-2), while also producing valid lesion heatmaps (AUPRC of 0.869 in the 81 images of IDRiD that contain pixel-level lesion annotations). Our results suggest that the proposed approach provides an efficient and interpretable solution against the problem of automated diabetic retinopathy grading.

@article{alpapado2021,
author={Alexandros Papadopoulos and Fotis Topouzis and Anastasios Delopoulos},
title={An interpretable multiple-instance approach for the detection of referable diabetic retinopathy in fundus images},
journal={Scientific Reports},
year={2021},
month={07},
date={2021-07-12},
url={https://www.nature.com/articles/s41598-021-93632-8.pdf},
doi={https://doi.org/10.1038/s41598-021-93632-8},
abstract={Diabetic retinopathy (DR) is one of the leading causes of vision loss across the world. Yet despite its wide prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for monitoring their condition. This can lead to delays in the start of treatment, thereby lowering their chances for a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to retinopathy severity estimates for patients in remote regions or even for complementing the human expert’s diagnosis. Here we propose a machine learning system for the detection of referable diabetic retinopathy in fundus images, which is based on the paradigm of multiple-instance learning. Our method extracts local information independently from multiple rectangular image patches and combines it efficiently through an attention mechanism that focuses on the abnormal regions of the eye (i.e. those that contain DR-induced lesions), thus resulting in a final image representation that is suitable for classification. Furthermore, by leveraging the attention mechanism our algorithm can seamlessly produce informative heatmaps that highlight the regions where the lesions are located. We evaluate our approach on the publicly available Kaggle, Messidor-2 and IDRiD retinal image datasets, in which it exhibits near state-of-the-art classification performance (AUC of 0.961 in Kaggle and 0.976 in Messidor-2), while also producing valid lesion heatmaps (AUPRC of 0.869 in the 81 images of IDRiD that contain pixel-level lesion annotations). Our results suggest that the proposed approach provides an efficient and interpretable solution against the problem of automated diabetic retinopathy grading.}
}
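
As an illustration of the attention-based pooling described in this paper's abstract, here is a minimal NumPy sketch (the parameter names V and w and the shapes are hypothetical, not the paper's implementation):

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Pool patch embeddings H (num_patches x d) into one bag vector.

    V (d x k) and w (k,) stand in for learned attention parameters; the
    attention weights `a` double as per-patch weights for a lesion heatmap."""
    scores = np.tanh(H @ V) @ w            # one raw score per patch
    a = np.exp(scores - scores.max())
    a /= a.sum()                           # softmax over patches
    return a @ H, a                        # bag representation, patch weights
```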

(J)
Konstantinos Kyritsis, Petter Fagerberg, Ioannis Ioakimidis, K Ray Chaudhuri, Heinz Reichmann, Lisa Klingelhoefer and Anastasios Delopoulos
"Assessment of real life eating difficulties in Parkinson’s disease patients by measuring plate to mouth movement elongation with inertial sensors"
Scientific Reports, 11, pp. 1-14, 2021 Jan
[Abstract][BibTex][pdf]

Parkinson’s disease (PD) is a neurodegenerative disorder with both motor and non-motor symptoms. Despite the progressive nature of PD, early diagnosis, tracking the disease’s natural history and measuring the drug response are factors that play a major role in determining the quality of life of the affected individual. Apart from the common motor symptoms, i.e., tremor at rest, rigidity and bradykinesia, studies suggest that PD is associated with disturbances in eating behavior and energy intake. Specifically, PD is associated with drug-induced impulsive eating disorders such as binge eating, appetite-related non-motor issues such as weight loss and/or gain as well as dysphagia—factors that correlate with difficulties in completing day-to-day eating-related tasks. In this work we introduce Plate-to-Mouth (PtM), an indicator that relates with the time spent for the hand operating the utensil to transfer a quantity of food from the plate into the mouth during the course of a meal. We propose a two-step approach towards the objective calculation of PtM. Initially, we use the 3D acceleration and orientation velocity signals from an off-the-shelf smartwatch to detect the bite moments and upwards wrist micromovements that occur during a meal session. Afterwards, we process the upwards hand micromovements that appear prior to every detected bite during the meal in order to estimate the bite’s PtM duration. Finally, we use a density-based scheme to estimate the PtM durations distribution and form the in-meal eating behavior profile of the subject. In the results section, we provide validation for every step of the process independently, as well as showcase our findings using a total of three datasets, one collected in a controlled clinical setting using standardized meals (with a total of 28 meal sessions from 7 Healthy Controls (HC) and 21 PD patients) and two collected in-the-wild under free living conditions (37 meals from 4 HC/10 PD patients and 629 meals from 3 HC/3 PD patients, respectively). Experimental results reveal an Area Under the Curve (AUC) of 0.748 for the clinical dataset and 0.775/1.000 for the in-the-wild datasets towards the classification of in-meal eating behavior profiles to the PD or HC group. This is the first work that attempts to use wearable Inertial Measurement Unit (IMU) sensor data, collected both in clinical and in-the-wild settings, towards the extraction of an objective eating behavior indicator for PD.

@article{kyritsis2021assessment,
author={Konstantinos Kyritsis and Petter Fagerberg and Ioannis Ioakimidis and K Ray Chaudhuri and Heinz Reichmann and Lisa Klingelhoefer and Anastasios Delopoulos},
title={Assessment of real life eating difficulties in Parkinson’s disease patients by measuring plate to mouth movement elongation with inertial sensors},
journal={Scientific Reports},
volume={11},
pages={1-14},
year={2021},
month={01},
date={2021-01-15},
url={https://www.nature.com/articles/s41598-020-80394-y},
doi={https://doi.org/10.1038/s41598-020-80394-y},
keywords={Sensors;Computational science;Parkinson's disease;IMU;wearable;PD;Plate to mouth;PtM},
abstract={Parkinson’s disease (PD) is a neurodegenerative disorder with both motor and non-motor symptoms. Despite the progressive nature of PD, early diagnosis, tracking the disease’s natural history and measuring the drug response are factors that play a major role in determining the quality of life of the affected individual. Apart from the common motor symptoms, i.e., tremor at rest, rigidity and bradykinesia, studies suggest that PD is associated with disturbances in eating behavior and energy intake. Specifically, PD is associated with drug-induced impulsive eating disorders such as binge eating, appetite-related non-motor issues such as weight loss and/or gain as well as dysphagia—factors that correlate with difficulties in completing day-to-day eating-related tasks. In this work we introduce Plate-to-Mouth (PtM), an indicator that relates with the time spent for the hand operating the utensil to transfer a quantity of food from the plate into the mouth during the course of a meal. We propose a two-step approach towards the objective calculation of PtM. Initially, we use the 3D acceleration and orientation velocity signals from an off-the-shelf smartwatch to detect the bite moments and upwards wrist micromovements that occur during a meal session. Afterwards, we process the upwards hand micromovements that appear prior to every detected bite during the meal in order to estimate the bite’s PtM duration. Finally, we use a density-based scheme to estimate the PtM durations distribution and form the in-meal eating behavior profile of the subject. In the results section, we provide validation for every step of the process independently, as well as showcase our findings using a total of three datasets, one collected in a controlled clinical setting using standardized meals (with a total of 28 meal sessions from 7 Healthy Controls (HC) and 21 PD patients) and two collected in-the-wild under free living conditions (37 meals from 4 HC/10 PD patients and 629 meals from 3 HC/3 PD patients, respectively). Experimental results reveal an Area Under the Curve (AUC) of 0.748 for the clinical dataset and 0.775/1.000 for the in-the-wild datasets towards the classification of in-meal eating behavior profiles to the PD or HC group. This is the first work that attempts to use wearable Inertial Measurement Unit (IMU) sensor data, collected both in clinical and in-the-wild settings, towards the extraction of an objective eating behavior indicator for PD.}
}
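
A rough sketch of the density-based profiling step mentioned in the abstract (the KDE bandwidth and duration grid are assumptions; the paper's exact scheme may differ):

```python
import numpy as np
from scipy.stats import gaussian_kde

def ptm_profile(ptm_durations_s, grid=None):
    """Turn per-bite plate-to-mouth durations (seconds) into a density profile."""
    grid = np.linspace(0.0, 10.0, 200) if grid is None else grid
    density = gaussian_kde(ptm_durations_s)(grid)
    return density / density.sum()         # normalised in-meal eating profile
```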

2020

(J)
Alexandros Papadopoulos, Dimitrios Iakovakis, Lisa Klingelhoefer, Sevasti Bostantjopoulou, K. Ray Chaudhuri, Konstantinos Kyritsis, Stelios Hadjidimitriou, Vasileios Charisis, Leontios J. Hadjileontiadis and Anastasios Delopoulos
"Unobtrusive detection of Parkinson’s disease from multi-modal and in-the-wild sensor data using deep learning techniques"
Scientific Reports, 2020 Dec
[Abstract][BibTex][pdf]

Parkinson’s Disease (PD) is the second most common neurodegenerative disorder, affecting more than 1% of the population above 60 years old with both motor and non-motor symptoms of escalating severity as it progresses. Since it cannot be cured, treatment options focus on the improvement of PD symptoms. In fact, evidence suggests that early PD intervention has the potential to slow down symptom progression and improve the general quality of life in the long term. However, the initial motor symptoms are usually very subtle and, as a result, patients seek medical assistance only when their condition has substantially deteriorated; thus, missing the opportunity for an improved clinical outcome. This situation highlights the need for accessible tools that can screen for early motor PD symptoms and alert individuals to act accordingly. Here we show that PD and its motor symptoms can unobtrusively be detected from the combination of accelerometer and touchscreen typing data that are passively captured during natural user-smartphone interaction. To this end, we introduce a deep learning framework that analyses such data to simultaneously predict tremor, fine-motor impairment and PD. In a validation dataset from 22 clinically-assessed subjects (8 Healthy Controls (HC)/14 PD patients with a total data contribution of 18.305 accelerometer and 2.922 typing sessions), the proposed approach achieved 0.86/0.93 sensitivity/specificity for the binary classification task of HC versus PD. Additional validation on data from 157 subjects (131 HC/26 PD with a total contribution of 76.528 accelerometer and 18.069 typing sessions) with self-reported health status (HC or PD), resulted in area under curve of 0.87, with sensitivity/specificity of 0.92/0.69 and 0.60/0.92 at the operating points of highest sensitivity or specificity, respectively. Our findings suggest that the proposed method can be used as a stepping stone towards the development of an accessible PD screening tool that will passively monitor the subject-smartphone interaction for signs of PD and which could be used to reduce the critical gap between disease onset and start of treatment.

@article{alpapado2020,
author={Alexandros Papadopoulos and Dimitrios Iakovakis and Lisa Klingelhoefer and Sevasti Bostantjopoulou and K. Ray Chaudhuri and Konstantinos Kyritsis and Stelios Hadjidimitriou and Vasileios Charisis and Leontios J. Hadjileontiadis and Anastasios Delopoulos},
title={Unobtrusive detection of Parkinson’s disease from multi-modal and in-the-wild sensor data using deep learning techniques},
journal={Scientific Reports},
year={2020},
month={12},
date={2020-12-07},
url={https://www.nature.com/articles/s41598-020-78418-8},
doi={https://doi.org/10.1038/s41598-020-78418-8},
abstract={Parkinson’s Disease (PD) is the second most common neurodegenerative disorder, affecting more than 1% of the population above 60 years old with both motor and non-motor symptoms of escalating severity as it progresses. Since it cannot be cured, treatment options focus on the improvement of PD symptoms. In fact, evidence suggests that early PD intervention has the potential to slow down symptom progression and improve the general quality of life in the long term. However, the initial motor symptoms are usually very subtle and, as a result, patients seek medical assistance only when their condition has substantially deteriorated; thus, missing the opportunity for an improved clinical outcome. This situation highlights the need for accessible tools that can screen for early motor PD symptoms and alert individuals to act accordingly. Here we show that PD and its motor symptoms can unobtrusively be detected from the combination of accelerometer and touchscreen typing data that are passively captured during natural user-smartphone interaction. To this end, we introduce a deep learning framework that analyses such data to simultaneously predict tremor, fine-motor impairment and PD. In a validation dataset from 22 clinically-assessed subjects (8 Healthy Controls (HC)/14 PD patients with a total data contribution of 18.305 accelerometer and 2.922 typing sessions), the proposed approach achieved 0.86/0.93 sensitivity/specificity for the binary classification task of HC versus PD. Additional validation on data from 157 subjects (131 HC/26 PD with a total contribution of 76.528 accelerometer and 18.069 typing sessions) with self-reported health status (HC or PD), resulted in area under curve of 0.87, with sensitivity/specificity of 0.92/0.69 and 0.60/0.92 at the operating points of highest sensitivity or specificity, respectively. Our findings suggest that the proposed method can be used as a stepping stone towards the development of an accessible PD screening tool that will passively monitor the subject-smartphone interaction for signs of PD and which could be used to reduce the critical gap between disease onset and start of treatment.}
}
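
The reported 0.92/0.69 and 0.60/0.92 sensitivity/specificity pairs correspond to different thresholds on the same ROC curve; a small helper along these lines (not from the paper) picks such operating points:

```python
import numpy as np
from sklearn.metrics import roc_curve

def operating_point(y_true, y_score, min_sens=None, min_spec=None):
    """Pick an ROC operating point subject to a sensitivity or specificity floor."""
    fpr, tpr, thr = roc_curve(y_true, y_score)
    spec = 1.0 - fpr
    if min_sens is not None:               # maximise specificity given sensitivity
        i = np.argmax(np.where(tpr >= min_sens, spec, -1.0))
    else:                                  # maximise sensitivity given specificity
        i = np.argmax(np.where(spec >= min_spec, tpr, -1.0))
    return tpr[i], spec[i], thr[i]
```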

(J)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
"A Data Driven End-to-end Approach for In-the-wild Monitoring of Eating Behavior Using Smartwatches"
IEEE Journal of Biomedical and Health Informatics, 2020 Apr
[Abstract][BibTex][pdf]

The increased worldwide prevalence of obesity has sparked the interest of the scientific community towards tools that objectively and automatically monitor eating behavior. Despite the study of obesity being in the spotlight, such tools can also be used to study eating disorders (e.g. anorexia nervosa) or provide a personalized monitoring platform for patients or athletes. This paper presents a complete framework towards the automated i) modeling of in-meal eating behavior and ii) temporal localization of meals, from raw inertial data collected in-the-wild using commercially available smartwatches. Initially, we present an end-to-end Neural Network which detects food intake events (i.e. bites). The proposed network uses both convolutional and recurrent layers that are trained simultaneously. Subsequently, we show how the distribution of the detected bites throughout the day can be used to estimate the start and end points of meals, using signal processing algorithms. We perform extensive evaluation on each framework part individually. Leave-one-subject-out (LOSO) evaluation shows that our bite detection approach outperforms four state-of-the-art algorithms towards the detection of bites during the course of a meal (0.923 F1 score). Furthermore, LOSO and held-out set experiments regarding the estimation of meal start/end points reveal that the proposed approach outperforms a relevant approach found in the literature (Jaccard Index of 0.820 and 0.821 for the LOSO and held-out experiments, respectively). Experiments are performed using our publicly available FIC and the newly introduced FreeFIC datasets.

@article{kyritsis2020data,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={A Data Driven End-to-end Approach for In-the-wild Monitoring of Eating Behavior Using Smartwatches},
journal={IEEE Journal of Biomedical and Health Informatics},
year={2020},
month={04},
date={2020-04-03},
url={http://mug.ee.auth.gr/wp-content/uploads/kokirits2020data.pdf},
doi={https://doi.org/10.1109/JBHI.2020.2984907},
keywords={Wearable sensors;biomedical signal processing},
abstract={The increased worldwide prevalence of obesity has sparked the interest of the scientific community towards tools that objectively and automatically monitor eating behavior. Despite the study of obesity being in the spotlight, such tools can also be used to study eating disorders (e.g. anorexia nervosa) or provide a personalized monitoring platform for patients or athletes. This paper presents a complete framework towards the automated i) modeling of in-meal eating behavior and ii) temporal localization of meals, from raw inertial data collected in-the-wild using commercially available smartwatches. Initially, we present an end-to-end Neural Network which detects food intake events (i.e. bites). The proposed network uses both convolutional and recurrent layers that are trained simultaneously. Subsequently, we show how the distribution of the detected bites throughout the day can be used to estimate the start and end points of meals, using signal processing algorithms. We perform extensive evaluation on each framework part individually. Leave-one-subject-out (LOSO) evaluation shows that our bite detection approach outperforms four state-of-the-art algorithms towards the detection of bites during the course of a meal (0.923 F1 score). Furthermore, LOSO and held-out set experiments regarding the estimation of meal start/end points reveal that the proposed approach outperforms a relevant approach found in the literature (Jaccard Index of 0.820 and 0.821 for the LOSO and held-out experiments, respectively). Experiments are performed using our publicly available FIC and the newly introduced FreeFIC datasets.}
}
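
A simplified sketch of the second stage, turning detected bite timestamps into meal intervals (the kernel bandwidth and threshold are illustrative assumptions, not the paper's tuned signal-processing chain):

```python
import numpy as np

def localize_meals(bite_times_s, day_len_s=86_400, step_s=60, bw_s=600, thresh=1.5):
    """Estimate meal (start, end) times from bite timestamps via kernel density."""
    t = np.arange(0, day_len_s, step_s, dtype=float)
    density = np.zeros_like(t)
    for b in bite_times_s:                      # Gaussian kernel per detected bite
        density += np.exp(-0.5 * ((t - b) / bw_s) ** 2)
    active = density > thresh                   # regions dense enough in bites
    edges = np.flatnonzero(np.diff(active.astype(int)))
    bounds = np.r_[0, edges + 1, active.size]   # contiguous runs of `active`
    return [(t[a], t[b - 1]) for a, b in zip(bounds[:-1], bounds[1:]) if active[a]]
```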

2019

(J)
Alexandros Papadopoulos, Konstantinos Kyritsis, Lisa Klingelhoefer, Sevasti Bostanjopoulou, K. Ray Chaudhuri and Anastasios Delopoulos
"Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning"
IEEE Journal of Biomedical and Health Informatics, 2019 Dec
[Abstract][BibTex][pdf]

Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old, causing symptoms that are subtle at first, but whose intensity increases as the disease progresses. Automated detection of these symptoms could offer clues as to the early onset of the disease, thus improving the expected clinical outcomes of the patients via appropriately targeted interventions. This potential has led many researchers to develop methods that use widely available sensors to measure and quantify the presence of PD symptoms such as tremor, rigidity and bradykinesia. However, most of these approaches operate under controlled settings, such as in lab or at home, thus limiting their applicability under free-living conditions. In this work, we present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device. We propose a Multiple-Instance Learning approach, wherein a subject is represented as an unordered bag of accelerometer signal segments and a single, expert-provided, tremor annotation. Our method combines deep feature learning with a learnable pooling stage that is able to identify key instances within the subject bag, while still being trainable end-to-end. We validate our algorithm on a newly introduced dataset of 45 subjects, containing accelerometer signals collected entirely in-the-wild. The good classification performance obtained in the conducted experiments suggests that the proposed method can efficiently navigate the noisy environment of in-the-wild recordings.

@article{alpapado2019detecting,
author={Alexandros Papadopoulos and Konstantinos Kyritsis and Lisa Klingelhoefer and Sevasti Bostanjopoulou and K. Ray Chaudhuri and Anastasios Delopoulos},
title={Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning},
journal={IEEE Journal of Biomedical and Health Informatics},
year={2019},
month={12},
date={2019-12-24},
url={http://mug.ee.auth.gr/wp-content/uploads/alpapado2019detecting.pdf},
doi={https://doi.org/10.1109/JBHI.2019.2961748},
abstract={Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old, causing symptoms that are subtle at first, but whose intensity increases as the disease progresses. Automated detection of these symptoms could offer clues as to the early onset of the disease, thus improving the expected clinical outcomes of the patients via appropriately targeted interventions. This potential has led many researchers to develop methods that use widely available sensors to measure and quantify the presence of PD symptoms such as tremor, rigidity and bradykinesia. However, most of these approaches operate under controlled settings, such as in lab or at home, thus limiting their applicability under free-living conditions. In this work, we present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device. We propose a Multiple-Instance Learning approach, wherein a subject is represented as an unordered bag of accelerometer signal segments and a single, expert-provided, tremor annotation. Our method combines deep feature learning with a learnable pooling stage that is able to identify key instances within the subject bag, while still being trainable end-to-end. We validate our algorithm on a newly introduced dataset of 45 subjects, containing accelerometer signals collected entirely in-the-wild. The good classification performance obtained in the conducted experiments suggests that the proposed method can efficiently navigate the noisy environment of in-the-wild recordings.}
}
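
For illustration, the bag representation described above can be built by windowing the raw stream; a minimal sketch (window length and sampling rate are assumed, not taken from the paper):

```python
import numpy as np

def make_bag(accel_xyz, fs=100, win_s=5.0):
    """Cut a subject's continuous accelerometer stream (N x 3) into fixed-length
    segments; the resulting bag shares a single subject-level tremor label."""
    n = int(fs * win_s)
    usable = (len(accel_xyz) // n) * n            # drop the ragged tail
    return accel_xyz[:usable].reshape(-1, n, 3)   # (instances, samples, axes)
```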

(J)
Christos Diou, Ioannis Sarafis, Vasileios Papapanagiotou, Ioannis Ioakimidis and Anastasios Delopoulos
"A methodology for obtaining objective measurements of population obesogenic behaviors in relation to the environment"
Statistical Journal of the IAOS, 35, (4), pp. 677-690, 2019 Dec
[Abstract][BibTex][pdf]

The way we eat and what we eat, the way we move and the way we sleep significantly impact the risk of becoming obese. These aspects of behavior decompose into several personal behavioral elements including our food choices, eating place preferences, transportation choices, sleeping periods and duration etc. Most of these elements are highly correlated in a causal way with the conditions of our local urban, social, regulatory and economic environment. To this end, the H2020 project “BigO: Big Data Against Childhood Obesity” (http://bigoprogram.eu) aims to create new sources of evidence together with exploration tools, assisting the Public Health Authorities in their effort to tackle childhood obesity. In this paper, we present the technology-based methodology that has been developed in the context of BigO in order to: (a) objectively monitor a matrix of a population’s obesogenic behavioral elements using commonly available wearable sensors (accelerometers, gyroscopes, GPS), embedded in smart phones and smart watches; (b) acquire information for the environment from open and online data sources; (c) provide aggregation mechanisms to correlate the population behaviors with the environmental characteristics; (d) ensure the privacy protection of the participating individuals; and (e) quantify the quality of the collected big data.

@article{DiouIAOS2019,
author={Christos Diou and Ioannis Sarafis and Vasileios Papapanagiotou and Ioannis Ioakimidis and Anastasios Delopoulos},
title={A methodology for obtaining objective measurements of population obesogenic behaviors in relation to the environment},
journal={Statistical Journal of the IAOS},
volume={35},
number={4},
pages={677-690},
year={2019},
month={12},
date={2019-12-10},
url={https://arxiv.org/pdf/1911.08315.pdf},
doi={https://doi.org/10.3233/SJI-190537},
abstract={The way we eat and what we eat, the way we move and the way we sleep significantly impact the risk of becoming obese. These aspects of behavior decompose into several personal behavioral elements including our food choices, eating place preferences, transportation choices, sleeping periods and duration etc. Most of these elements are highly correlated in a causal way with the conditions of our local urban, social, regulatory and economic environment. To this end, the H2020 project “BigO: Big Data Against Childhood Obesity” (http://bigoprogram.eu) aims to create new sources of evidence together with exploration tools, assisting the Public Health Authorities in their effort to tackle childhood obesity. In this paper, we present the technology-based methodology that has been developed in the context of BigO in order to: (a) objectively monitor a matrix of a population’s obesogenic behavioral elements using commonly available wearable sensors (accelerometers, gyroscopes, GPS), embedded in smart phones and smart watches; (b) acquire information for the environment from open and online data sources; (c) provide aggregation mechanisms to correlate the population behaviors with the environmental characteristics; (d) ensure the privacy protection of the participating individuals; and (e) quantify the quality of the collected big data.}
}
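
Points (c) and (d) of the abstract combine region-level aggregation with privacy safeguards; a toy sketch of one such mechanism (the column names and the k_min floor are hypothetical, not BigO's actual pipeline):

```python
import pandas as pd

def aggregate_by_region(df, k_min=10):
    """Region-level behavior aggregates, suppressing small groups for privacy.

    df is assumed to have columns 'region', 'subject_id', 'daily_steps' and
    'fast_food_visits' (hypothetical names)."""
    g = df.groupby("region").agg(
        subjects=("subject_id", "nunique"),
        mean_steps=("daily_steps", "mean"),
        mean_visits=("fast_food_visits", "mean"),
    )
    return g[g["subjects"] >= k_min]   # publish only groups of >= k_min subjects
```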

(J)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
"Modeling Wrist Micromovements to Measure In-Meal Eating Behavior from Inertial Sensor Data"
IEEE Journal of Biomedical and Health Informatics (JBHI), 2019 Jan
[Abstract][BibTex][pdf]

Overweight and obesity are both associated with in-meal eating parameters such as eating speed. Recently, the plethora of available wearable devices in the market ignited the interest of both the scientific community and the industry towards unobtrusive solutions for eating behavior monitoring. In this paper we present an algorithm for automatically detecting the in-meal food intake cycles using the inertial signals (acceleration and orientation velocity) from an off-the-shelf smartwatch. We use 5 specific wrist micromovements to model the series of actions leading to and following an intake event (i.e. bite). Food intake detection is performed in two steps. In the first step we process windows of raw sensor streams and estimate their micromovement probability distributions by means of a Convolutional Neural Network (CNN). In the second step we use a Long-Short Term Memory (LSTM) network to capture the temporal evolution and classify sequences of windows as food intake cycles. Evaluation is performed using a challenging dataset of 21 meals from 12 subjects. In our experiments we compare the performance of our algorithm against three state-of-the-art approaches, where our approach achieves the highest F1 detection score (0.913 in the Leave-One-Subject-Out experiment). The dataset used in the experiments is available at https://mug.ee.auth.gr/intake-cycle-detection/.

@article{kyritsis2019modeling,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={Modeling Wrist Micromovements to Measure In-Meal Eating Behavior from Inertial Sensor Data},
journal={IEEE Journal of Biomedical and Health Informatics (JBHI)},
year={2019},
month={01},
date={2019-01-09},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2019modeling.pdf},
doi={https://doi.org/10.1109/JBHI.2019.2892011},
abstract={Overweight and obesity are both associated with in-meal eating parameters such as eating speed. Recently, the plethora of available wearable devices in the market ignited the interest of both the scientific community and the industry towards unobtrusive solutions for eating behavior monitoring. In this paper we present an algorithm for automatically detecting the in-meal food intake cycles using the inertial signals (acceleration and orientation velocity) from an off-the-shelf smartwatch. We use 5 specific wrist micromovements to model the series of actions leading to and following an intake event (i.e. bite). Food intake detection is performed in two steps. In the first step we process windows of raw sensor streams and estimate their micromovement probability distributions by means of a Convolutional Neural Network (CNN). In the second step we use a Long-Short Term Memory (LSTM) network to capture the temporal evolution and classify sequences of windows as food intake cycles. Evaluation is performed using a challenging dataset of 21 meals from 12 subjects. In our experiments we compare the performance of our algorithm against three state-of-the-art approaches, where our approach achieves the highest F1 detection score (0.913 in the Leave-One-Subject-Out experiment). The dataset used in the experiments is available at https://mug.ee.auth.gr/intake-cycle-detection/.}
}
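
A skeletal PyTorch version of the two-step idea described above (the layer sizes and 6-channel input are assumptions; the paper's architecture is more elaborate):

```python
import torch
import torch.nn as nn

class MicromovementNet(nn.Module):
    """Sketch: a CNN maps raw IMU windows to micromovement probability
    distributions; an LSTM classifies sequences of windows as intake cycles."""
    def __init__(self, in_ch=6, n_micro=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_micro), nn.Softmax(dim=-1),
        )
        self.lstm = nn.LSTM(n_micro, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # P(window sequence is an intake cycle)

    def forward(self, x):                  # x: (batch, seq, channels, samples)
        b, s, c, n = x.shape
        p = self.cnn(x.reshape(b * s, c, n)).reshape(b, s, -1)
        out, _ = self.lstm(p)
        return torch.sigmoid(self.head(out[:, -1]))
```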

(J)
Billy Langlet, Petter Fagerberg, Anastasios Delopoulos, Vasileios Papapanagiotou, Christos Diou, Christos Maramis, Nikolaos Maglaveras, Anna Anvret and Ioannis Ioakimidis
"Predicting Real-Life Eating Behaviours Using Single School Lunches in Adolescents"
Nutrients, 11, (3), pp. 672, 2019 Mar
[Abstract][BibTex][pdf]

Large portion sizes and a high eating rate are associated with high energy intake and obesity. Most individuals maintain their food intake weight (g) and eating rate (g/min) rank in relation to their peers, despite food and environmental manipulations. Single meal measures may enable identification of “large portion eaters” and “fast eaters,” finding individuals at risk of developing obesity. The aim of this study was to predict real-life food intake weight and eating rate based on one school lunch. Twenty-four high-school students with a mean (±SD) age of 16.8 yr (±0.7) and body mass index of 21.9 (±4.1) were recruited, using no exclusion criteria. Food intake weight and eating rate was first self-rated (“Less,” “Average” or “More than peers”), then objectively recorded during one school lunch (absolute weight of consumed food in grams). Afterwards, subjects recorded as many main meals (breakfasts, lunches and dinners) as possible in real-life for a period of at least two weeks, using a Bluetooth connected weight scale and a smartphone application. On average participants recorded 18.9 (7.3) meals during the study. Real-life food intake weight was 327.4 g (±110.6), which was significantly lower (p = 0.027) than the single school lunch, at 367.4 g (±167.2). When the intra-class correlation of food weight intake between the objectively recorded real-life and school lunch meals was compared, the correlation was excellent (R = 0.91). Real-life eating rate was 33.5 g/min (±14.8), which was significantly higher (p = 0.010) than the single school lunch, at 27.7 g/min (±13.3). The intra-class correlation of the recorded eating rate between real-life and school lunch meals was very large (R = 0.74). The participants’ recorded food intake weights and eating rates were divided into terciles and compared between school lunches and real-life, with moderate or higher agreement (κ = 0.75 and κ = 0.54, respectively). In contrast, almost no agreement was observed between self-rated and real-life recorded rankings of food intake weight and eating rate (κ = 0.09 and κ = 0.08, respectively). The current study provides evidence that both food intake weight and eating rates per meal vary considerably in real-life per individual. However, based on these behaviours, most students can be correctly classified in regard to their peers based on single school lunches. In contrast, self-reported food intake weight and eating rate are poor predictors of real-life measures. Finally, based on the recorded individual variability of real-life food intake weight and eating rate, it is not advised to rank individuals based on single recordings collected in real-life settings.

@article{Langlet2019Predicting,
author={Billy Langlet and Petter Fagerberg and Anastasios Delopoulos and Vasileios Papapanagiotou and Christos Diou and Christos Maramis and Nikolaos Maglaveras and Anna Anvret and Ioannis Ioakimidis},
title={Predicting Real-Life Eating Behaviours Using Single School Lunches in Adolescents},
journal={Nutrients},
volume={11},
number={3},
pages={672},
year={2019},
month={03},
date={2019-03-20},
url={https://www.mdpi.com/2072-6643/11/3/672/pdf},
doi={https://doi.org/10.3390/nu11030672},
abstract={Large portion sizes and a high eating rate are associated with high energy intake and obesity. Most individuals maintain their food intake weight (g) and eating rate (g/min) rank in relation to their peers, despite food and environmental manipulations. Single meal measures may enable identification of “large portion eaters” and “fast eaters,” finding individuals at risk of developing obesity. The aim of this study was to predict real-life food intake weight and eating rate based on one school lunch. Twenty-four high-school students with a mean (±SD) age of 16.8 yr (±0.7) and body mass index of 21.9 (±4.1) were recruited, using no exclusion criteria. Food intake weight and eating rate was first self-rated (“Less,” “Average” or “More than peers”), then objectively recorded during one school lunch (absolute weight of consumed food in grams). Afterwards, subjects recorded as many main meals (breakfasts, lunches and dinners) as possible in real-life for a period of at least two weeks, using a Bluetooth connected weight scale and a smartphone application. On average participants recorded 18.9 (7.3) meals during the study. Real-life food intake weight was 327.4 g (±110.6), which was significantly lower (p = 0.027) than the single school lunch, at 367.4 g (±167.2). When the intra-class correlation of food weight intake between the objectively recorded real-life and school lunch meals was compared, the correlation was excellent (R = 0.91). Real-life eating rate was 33.5 g/min (±14.8), which was significantly higher (p = 0.010) than the single school lunch, at 27.7 g/min (±13.3). The intra-class correlation of the recorded eating rate between real-life and school lunch meals was very large (R = 0.74). The participants’ recorded food intake weights and eating rates were divided into terciles and compared between school lunches and real-life, with moderate or higher agreement (κ = 0.75 and κ = 0.54, respectively). In contrast, almost no agreement was observed between self-rated and real-life recorded rankings of food intake weight and eating rate (κ = 0.09 and κ = 0.08, respectively). The current study provides evidence that both food intake weight and eating rates per meal vary considerably in real-life per individual. However, based on these behaviours, most students can be correctly classified in regard to their peers based on single school lunches. In contrast, self-reported food intake weight and eating rate are poor predictors of real-life measures. Finally, based on the recorded individual variability of real-life food intake weight and eating rate, it is not advised to rank individuals based on single recordings collected in real-life settings.}
}
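
The per-meal eating rate and the tercile-agreement statistic used above can be computed roughly as follows (a sketch; the study's exact tercile construction may differ):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def eating_rate(weights_g, times_s):
    """Mean eating rate (g/min) from one meal's plate-weight recording."""
    return (weights_g[0] - weights_g[-1]) / ((times_s[-1] - times_s[0]) / 60.0)

def tercile_agreement(lunch_values, real_life_values):
    """Cohen's kappa between tercile ranks from one lunch vs. real-life means."""
    def terciles(v):
        q1, q2 = np.quantile(v, [1 / 3, 2 / 3])
        return np.digitize(v, [q1, q2])          # 0, 1, 2 = low/mid/high tercile
    return cohen_kappa_score(terciles(np.asarray(lunch_values)),
                             terciles(np.asarray(real_life_values)))
```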

2018

(J)
Janet van den Boer, Annemiek van der Lee, Lingchuan Zhou, Vasileios Papapanagiotou, Christos Diou, Anastasios Delopoulos and Monica Mars
"The SPLENDID Eating Detection Sensor: Development and Feasibility Study"
JMIR mHealth and uHealth, 6, (9), pp. 170, 2018 Sep
[Abstract][BibTex]

The available methods for monitoring food intake---which for a great part rely on self-report---often provide biased and incomplete data. Currently, no good technological solutions are available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before. To date, literature is lacking a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful wearer comfort and discreetness of the device need to be improved.

@article{2018Boer,
author={Janet van den Boer and Annemiek van der Lee and Lingchuan Zhou and Vasileios Papapanagiotou and Christos Diou and Anastasios Delopoulos and Monica Mars},
title={The SPLENDID Eating Detection Sensor: Development and Feasibility Study},
journal={JMIR mHealth and uHealth},
volume={6},
number={9},
pages={170},
year={2018},
month={09},
date={2018-09-04},
doi={https://doi.org/10.2196/mhealth.9781},
issn={2291-5222},
abstract={The available methods for monitoring food intake---which for a great part rely on self-report---often provide biased and incomplete data. Currently, no good technological solutions are available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before. To date, literature is lacking a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful wearer comfort and discreetness of the device need to be improved.}
}

(J)
Christos Diou, Pantelis Lelekas and Anastasios Delopoulos
"Image-Based Surrogates of Socio-Economic Status in Urban Neighborhoods Using Deep Multiple Instance Learning"
Journal of Imaging, 4, (11), pp. 125, 2018 Oct
[Abstract][BibTex]

Background: Evidence-based policymaking requires data about the local population’s socioeconomic status (SES) at detailed geographical level, however, such information is often not available, or is too expensive to acquire. Researchers have proposed solutions to estimate SES indicators by analyzing Google Street View images, however, these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available by statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable at 30 municipalities of varying SES in Greece. Our model has R2=0.76 and a correlation coefficient of 0.874 with the true unemployment rate, while it achieves a mean absolute percentage error of 0.089 and mean absolute error of 1.87 on a held-out test set. Similar results are also obtained for other socioeconomic indicators, related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to estimate SES indicators at the local level automatically, using images of parked cars detected via Google Street View, without the need for any manual labeling effort

@article{Diou2018JI,
author={Christos Diou and Pantelis Lelekas and Anastasios Delopoulos},
title={Image-Based Surrogates of Socio-Economic Status in Urban Neighborhoods Using Deep Multiple Instance Learning},
journal={Journal of Imaging},
volume={4},
number={11},
pages={125},
year={2018},
month={10},
date={2018-10-23},
doi={https://doi.org/10.3390/jimaging4110125},
issn={2313-433X},
abstract={Background: Evidence-based policymaking requires data about the local population’s socioeconomic status (SES) at detailed geographical level, however, such information is often not available, or is too expensive to acquire. Researchers have proposed solutions to estimate SES indicators by analyzing Google Street View images, however, these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available by statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable at 30 municipalities of varying SES in Greece. Our model has R2=0.76 and a correlation coefficient of 0.874 with the true unemployment rate, while it achieves a mean absolute percentage error of 0.089 and mean absolute error of 1.87 on a held-out test set. Similar results are also obtained for other socioeconomic indicators, related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to estimate SES indicators at the local level automatically, using images of parked cars detected via Google Street View, without the need for any manual labeling effort}
}
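
The final estimation step described in the abstract is ordinary linear regression of the SES indicator on the image-derived surrogate; a toy sketch with made-up values:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# surrogate: one image-derived value per municipality (e.g., an aggregate of
# predicted parked-car classes); target: the official unemployment rate.
surrogate = np.array([[0.42], [0.58], [0.31]])   # toy values, 3 municipalities
unemployment = np.array([24.1, 18.3, 27.9])      # toy values (%)

model = LinearRegression().fit(surrogate, unemployment)
print("R^2:", model.score(surrogate, unemployment))
print("MAE:", mean_absolute_error(unemployment, model.predict(surrogate)))
```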

(J)
Maryam Esfandiari, Vasilis Papapanagiotou, Christos Diou, Modjtaba Zandian, Jenny Nolstam, Per Södersten and Cecilia Bergh
"Control of Eating Behavior Using a Novel Feedback System"
JoVE, (135), 2018 May
[Abstract][BibTex]

Subjects eat food from a plate that sits on a scale connected to a computer that records the weight loss of the plate during the meal and makes up a curve of food intake, meal duration and rate of eating modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders and has reduced the weight and improved the health of severely overweight patients.

@article{Esfandiari2018,
author={Maryam Esfandiari and Vasilis Papapanagiotou and Christos Diou and Modjtaba Zandian and Jenny Nolstam and Per Södersten and Cecilia Bergh},
title={Control of Eating Behavior Using a Novel Feedback System},
journal={JoVE},
number={135},
year={2018},
month={05},
date={2018-05-08},
doi={https://doi.org/10.3791/57432},
abstract={Subjects eat food from a plate that sits on a scale connected to a computer that records the weight loss of the plate during the meal and makes up a curve of food intake, meal duration and rate of eating modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders and has reduced the weight and improved the health of severely overweight patients.}
}
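
The quadratic intake model mentioned in the abstract can be fitted with a plain least-squares polynomial fit; a minimal sketch (the paper uses a custom algorithm, this only shows the basic idea):

```python
import numpy as np

def fit_intake_curve(times_s, eaten_g):
    """Fit the quadratic cumulative-intake model y(t) = a*t^2 + b*t + c."""
    a, b, c = np.polyfit(times_s, eaten_g, deg=2)
    return {
        "total_g": eaten_g[-1],     # total food intake over the meal
        "rate0_gps": b,             # eating rate (g/s) at meal start, dy/dt at t=0
        "decel": 2 * a,             # change of eating rate (typically negative)
    }
```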

(J)
George Mamalakis, Christos Diou, Andreas Symeonidis and Leonidas Georgiadis
"Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis"
Neural Computing and Applications, 2018 Jul
[Abstract][BibTex]

In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon's file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.

@article{Mamalakis2018,
author={George Mamalakis and Christos Diou and Andreas Symeonidis and Leonidas Georgiadis},
title={Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis},
journal={Neural Computing and Applications},
year={2018},
month={07},
date={2018-07-05},
doi={https://doi.org/10.1007/s00521-018-3550-x},
issn={1433-3058},
abstract={In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon's file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.}
}
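
The similarity at the core of the filter is a normalised longest common subsequence; a sketch (the paper's exact normalisation may differ):

```python
def norm_lcs(a, b):
    """LCS length normalised to [0, 1], as a k-NN similarity between two
    outlier sequences (standard dynamic-programming formulation)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n] / max(m, n) if max(m, n) else 1.0
```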

(J)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Span error bound for weighted SVM with applications in hyperparameter selection (preprint)"
CoRR, abs/1809.06124, 2018 Sep
[Abstract][BibTex][pdf]

Weighted SVM (or fuzzy SVM) is the most widely used SVM variant, owing its effectiveness to the use of instance weights. Proper selection of the instance weights can lead to increased generalization performance. In this work, we extend the span error bound theory to weighted SVM and we introduce effective hyperparameter selection methods for the weighted SVM algorithm. The significance of the presented work is that it enables the application of span bound and span-rule with weighted SVM. The span bound is an upper bound of the leave-one-out error that can be calculated using a single trained SVM model. This is important since leave-one-out error is an almost unbiased estimator of the test error. Similarly, the span-rule gives the actual value of the leave-one-out error. Thus, one can apply span bound and span-rule as computationally lightweight alternatives of the leave-one-out procedure for hyperparameter selection. The main theoretical contributions are: (a) we prove the necessary and sufficient condition for the existence of the span of a support vector in weighted SVM; and (b) we prove the extension of span bound and span-rule to weighted SVM. We experimentally evaluate the span bound and the span-rule for hyperparameter selection and we compare them with other methods that are applicable to weighted SVM: the K-fold cross-validation and the $\xi - \alpha$ bound. Experiments on 14 benchmark data sets and data sets with importance scores for the training instances show that: (a) the condition for the existence of span in weighted SVM is satisfied almost always; (b) the span-rule is the most effective method for weighted SVM hyperparameter selection; (c) the span-rule is the best predictor of the test error in the mean square error sense; and (d) the span-rule is efficient and, for certain problems, it can be calculated faster than K-fold cross-validation.

@article{Sarafis2018CoRR,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Span error bound for weighted SVM with applications in hyperparameter selection (preprint)},
journal={CoRR},
volume={abs/1809.06124},
year={2018},
month={09},
date={2018-09-17},
url={https://arxiv.org/pdf/1809.06124.pdf},
abstract={Weighted SVM (or fuzzy SVM) is the most widely used SVM variant, owing its effectiveness to the use of instance weights. Proper selection of the instance weights can lead to increased generalization performance. In this work, we extend the span error bound theory to weighted SVM and we introduce effective hyperparameter selection methods for the weighted SVM algorithm. The significance of the presented work is that it enables the application of span bound and span-rule with weighted SVM. The span bound is an upper bound of the leave-one-out error that can be calculated using a single trained SVM model. This is important since leave-one-out error is an almost unbiased estimator of the test error. Similarly, the span-rule gives the actual value of the leave-one-out error. Thus, one can apply span bound and span-rule as computationally lightweight alternatives of the leave-one-out procedure for hyperparameter selection. The main theoretical contributions are: (a) we prove the necessary and sufficient condition for the existence of the span of a support vector in weighted SVM; and (b) we prove the extension of span bound and span-rule to weighted SVM. We experimentally evaluate the span bound and the span-rule for hyperparameter selection and we compare them with other methods that are applicable to weighted SVM: the K-fold cross-validation and the $\xi - \alpha$ bound. Experiments on 14 benchmark data sets and data sets with importance scores for the training instances show that: (a) the condition for the existence of span in weighted SVM is satisfied almost always; (b) the span-rule is the most effective method for weighted SVM hyperparameter selection; (c) the span-rule is the best predictor of the test error in the mean square error sense; and (d) the span-rule is efficient and, for certain problems, it can be calculated faster than K-fold cross-validation.}
}
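
The setting this abstract studies can be illustrated in a few lines: an SVM trained with per-instance importance weights, with hyperparameters selected by a validation search. The sketch below uses scikit-learn's sample_weight support on synthetic data and a held-out split; it does not implement the paper's span bound or span-rule, which are meant to replace exactly this kind of costly repeated retraining. All data, weights and grids are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
w = rng.uniform(0.1, 1.0, size=200)          # importance weight per instance

X_tr, X_va, y_tr, y_va, w_tr, _ = train_test_split(X, y, w, random_state=0)

# Hold-out selection over (C, gamma); the span bound / span-rule would
# replace this search with estimates computed from a single trained model.
best_C, best_gamma = max(
    ((C, g) for C in (0.1, 1, 10) for g in (0.01, 0.1, 1)),
    key=lambda p: SVC(C=p[0], gamma=p[1], kernel="rbf")
    .fit(X_tr, y_tr, sample_weight=w_tr)
    .score(X_va, y_va),
)
print(best_C, best_gamma)
```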

(J)
Vasilis Papapanagiotou, Christos Diou, Ioannis Ioakimidis, Per Sodersten and Anastasios Delopoulos
"Automatic analysis of food intake and meal microstructure based on continuous weight measurements"
IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2018 Mar
[Abstract][BibTex][pdf]

The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is next used to model a meal as a sequence of such events. The selection of the most likely parse tree is finally used to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators, and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.

@article{Vassilis2018,
author={Vasilis Papapanagiotou and Christos Diou and Ioannis Ioakimidis and Per Sodersten and Anastasios Delopoulos},
title={Automatic analysis of food intake and meal microstructure based on continuous weight measurements},
journal={IEEE Journal of Biomedical and Health Informatics},
volume={PP},
number={99},
pages={1-1},
year={2018},
month={03},
date={2018-03-05},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2018automated.pdf},
doi={https://doi.org/10.1109/JBHI.2018.2812243},
abstract={The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is next used to model a meal as a sequence of such events. The selection of the most likely parse tree is finally used to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators, and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.}
}
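
As a toy companion to the symbolization step described above, the sketch below maps successive plate-weight changes to bite/addition/noise symbols. The thresholds and the symbol alphabet are illustrative assumptions; the paper's context-free grammar and most-likely-parse selection are not reproduced here.

```python
import numpy as np

def symbolize(weights, bite_thresh=-5.0, noise_band=1.0):
    """Map weight deltas to symbols: 'b' = bite (drop), 'a' = food
    addition (rise), '.' = change within the noise band."""
    out = []
    for delta in np.diff(np.asarray(weights, dtype=float)):
        if delta < bite_thresh:
            out.append("b")       # plate lost weight: candidate bite
        elif delta > noise_band:
            out.append("a")       # plate gained weight: food added
        else:
            out.append(".")       # sensor noise / no event
    return "".join(out)

print(symbolize([350, 348, 340, 341, 410, 402]))   # -> '.b.ab'
```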

2017

(J)
Billy Langlet, Anna Anvret, Christos Maramis, Ioannis Moulos, Vasileios Papapanagiotou, Christos Diou, Eirini Lekka, Rachel Heimeier, Anastasios Delopoulos and Ioannis Ioakimidis
"Objective measures of eating behaviour in a Swedish high school"
Behaviour & Information Technology, 36, (10), pp. 1005-1013, 2017 May
[Abstract][BibTex][pdf]

Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies of quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app in order to record eating behaviours in a sample of 41 students, 16–17 years old. Without disturbing the normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females, during lunches of similar lengths. While both groups took a similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger-scale use of our methods is feasible, but more hypothesis-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.

@article{Langlet2017,
author={Billy Langlet and Anna Anvret and Christos Maramis and Ioannis Moulos and Vasileios Papapanagiotou and Christos Diou and Eirini Lekka and Rachel Heimeier and Anastasios Delopoulos and Ioannis Ioakimidis},
title={Objective measures of eating behaviour in a Swedish high school},
journal={Behaviour & Information Technology},
volume={36},
number={10},
pages={1005-1013},
year={2017},
month={05},
date={2017-05-06},
url={https://doi.org/10.1080/0144929X.2017.1322146},
doi={https://doi.org/10.1080/0144929X.2017.1322146},
abstract={Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies of quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app in order to record eating behaviours in a sample of 41 students, 16–17 years old. Without disturbing the normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females, during lunches of similar lengths. While both groups took similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger scale use of our methods is feasible, but more hypotheses-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.}
}

2016

(J)
Vasilis Papapanagiotou, Christos Diou, Lingchuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"A novel chewing detection system based on PPG, audio and accelerometry"
IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2016 Jan
[Abstract][BibTex][pdf]

In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smartphones, can be used to robustly extract objective, real-time measurements of human behaviour. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this work, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high-accuracy, low-sampling-rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects that includes more than 60 hours of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.

@article{7736096,
author={Vasilis Papapanagiotou and Christos Diou and Lingchuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={A novel chewing detection system based on PPG, audio and accelerometry},
journal={IEEE Journal of Biomedical and Health Informatics},
volume={PP},
number={99},
pages={1-1},
year={2016},
month={01},
date={2016-01-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017novel.pdf},
doi={https://doi.org/10.1109/JBHI.2016.2625271},
keywords={Ear;Informatics;Microphones;Monitoring;Sensor systems;Signal processing algorithms},
abstract={In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smart phones, can be used to robustly extract objective, and real-time measurements of human behaviour. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this work, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high accuracy and low sampling rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects, that includes more than 60 hours of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.}
}
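
The late-fusion step described above can be illustrated in a few lines: one SVM per modality, with the decision scores combined into a single eating/non-eating decision. Features are random placeholders and the fusion weight is an assumption, not a value from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_audio = rng.normal(size=(300, 20))       # per-window audio features
X_ppg = rng.normal(size=(300, 8))          # per-window PPG features
y = rng.integers(0, 2, size=300)           # 1 = eating, 0 = not eating

svm_audio = SVC(kernel="rbf").fit(X_audio, y)
svm_ppg = SVC(kernel="rbf").fit(X_ppg, y)

# Late fusion: convex combination of the per-modality decision scores.
alpha = 0.6
fused = alpha * svm_audio.decision_function(X_audio) \
    + (1 - alpha) * svm_ppg.decision_function(X_ppg)
predictions = (fused > 0).astype(int)
```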

(J)
Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles A. Mitkas
"Response modeling of small-scale energy consumers for effective demand response applications"
Electric Power Systems Research, 132, pp. 78-93, 2016 Mar
[Abstract][BibTex][pdf]

The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.

@article{Chrysopoulos2016Response,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Response modeling of small-scale energy consumers for effective demand response applications},
journal={Electric Power Systems Research},
volume={132},
pages={78-93},
year={2016},
month={03},
date={2016-03-01},
url={http://www.sciencedirect.com/science/article/pii/S0378779615003223},
doi={https://doi.org/10.1016/j.epsr.2015.10.026},
abstract={The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.}
}
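
To make the notion of a sensitivity factor concrete, the sketch below shifts the start-time distribution of a single activity away from a peak-tariff window; a larger sensitivity moves more probability mass to cheap hours. Tariffs, the preference curve and the functional form are all illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

hours = np.arange(24)
price = np.where((hours >= 17) & (hours <= 20), 0.30, 0.15)  # peak tariff
preference = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)        # prefers ~19:00

def start_probs(preference, price, sensitivity):
    """Down-weight expensive hours; sensitivity = 0 ignores price."""
    utility = preference * np.exp(-sensitivity * price / price.min())
    return utility / utility.sum()

before = start_probs(preference, price, sensitivity=0.0)
after = start_probs(preference, price, sensitivity=2.0)
print(hours[before.argmax()], hours[after.argmax()])   # e.g. 19 -> 21
```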

(J)
Vasileios Papapanagiotou, Christos Diou and Anastasios Delopoulos
"Improving Concept-Based Image Retrieval with Training Weights Computed from Tags"
ACM Transactions on Multimedia Computing, Communications, and Applications, 12, (2), 2016 Mar
[Abstract][BibTex][pdf]

This article presents a novel approach to training classifiers for concept detection using tags and a variant of Support Vector Machine that enables the usage of training weights per sample. Combined with an appropriate tag weighting mechanism, more relevant samples play a more important role in the calibration of the final concept-detector model. We propose a complete, automated framework that (i) calculates relevance scores for each image-concept pair based on image tags, (ii) transforms the scores into relevance probabilities and automatically annotates each image according to this probability, (iii) transforms either the relevance scores or the probabilities into appropriate training weights and finally, (iv) incorporates the training weights and the visual features into a Fuzzy Support Vector Machine classifier to build the concept-detector model. The framework can be applied to online public collections, by gathering a large pool of diverse images, and using the calculated probability to select a training set and the associated training weights. To evaluate our argument, we experiment on two large annotated datasets. Experiments highlight the retrieval effectiveness of the proposed approach. Furthermore, experiments with various levels of annotation error show that using weights derived from tags significantly increases the robustness of the resulting concept detectors.

@article{Papapanagiotou2016Improving,
author={Vasileios Papapanagiotou and Christos Diou and Anastasios Delopoulos},
title={Improving Concept-Based Image Retrieval with Training Weights Computed from Tags},
journal={ACM Transactions on Multimedia Computing, Communications, and Applications},
volume={12},
number={2},
year={2016},
month={03},
date={2016-03-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2016improving.pdf},
doi={https://doi.org/10.1145/2790230},
abstract={This article presents a novel approach to training classifiers for concept detection using tags and a variant of Support Vector Machine that enables the usage of training weights per sample. Combined with an appropriate tag weighting mechanism, more relevant samples play a more important role in the calibration of the final concept-detector model. We propose a complete, automated framework that (i) calculates relevance scores for each image-concept pair based on image tags, (ii) transforms the scores into relevance probabilities and automatically annotates each image according to this probability, (iii) transforms either the relevance scores or the probabilities into appropriate training weights and finally, (iv) incorporates the training weights and the visual features into a Fuzzy Support Vector Machine classifier to build the concept-detector model. The framework can be applied to online public collections, by gathering a large pool of diverse images, and using the calculated probability to select a training set and the associated training weights. To evaluate our argument, we experiment on two large annotated datasets. Experiments highlight the retrieval effectiveness of the proposed approach. Furthermore, experiments with various levels of annotation error show that using weights derived from tags significantly increases the robustness of the resulting concept detectors.}
}
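
Steps (i) to (iii) of the framework described above can be caricatured as: score each image-concept pair from its tags, squash the score into a probability, and use it as a training weight for the weighted classifier. The scoring and squashing functions below are simple stand-ins, not the paper's weighting mechanism.

```python
import math

def relevance_score(image_tags, concept_terms):
    """Fraction of the concept's terms that appear among the image tags."""
    tags = set(image_tags)
    return sum(term in tags for term in concept_terms) / len(concept_terms)

def to_weight(score, steepness=6.0, midpoint=0.5):
    """Logistic squashing of a raw score into a (0, 1) training weight."""
    return 1.0 / (1.0 + math.exp(-steepness * (score - midpoint)))

tags = ["beach", "sea", "sunset", "holiday"]
print(to_weight(relevance_score(tags, ["sea", "beach"])))    # high weight
print(to_weight(relevance_score(tags, ["car", "street"])))   # low weight
```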

(J)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Online training of concept detectors for image retrieval using streaming clickthrough data"
Engineering Applications of Artificial Intelligence, 51, pp. 150-162, 2016 Jan
[Abstract][BibTex][pdf]

Clickthrough data from image search engines provide a massive and continuously generated source of user feedback that can be used to model how the search engine users perceive the visual content. Image clickthrough data have been successfully used to build concept detectors without any manual annotation effort, although the generated annotations suffer from labeling errors. Previous research efforts therefore focused on modeling the sample uncertainty in order to improve concept detector effectiveness. In this paper, we study the problem in an online learning setting using streaming clickthrough data where each click is treated separately when it becomes available; the concept detector model is therefore continuously updated without batch retraining. We argue that sample uncertainty can be incorporated in the online learning setting by exploiting the repetitions of incoming clicks at the classifier level, where these act as an implicit importance weighting mechanism. For online concept detector training we use the LASVM algorithm. The inferred weighting approximates the solution of batch trained concept detectors using weighted SVM variants that are known to achieve improved performance and high robustness to noise compared to the standard SVM. Furthermore, we evaluate methods for selecting negative samples using a small number of candidates sampled locally from the incoming stream of clicks. The selection criteria aim at drastically improving the performance and the convergence speed of the online concept detectors. To validate our arguments we conduct experiments for 30 concepts on the Clickture-Lite dataset. The experimental results demonstrate that: (a) the proposed online approach produces effective and noise resilient concept detectors that can take advantage of streaming clickthrough data and achieve performance that is equivalent to Fuzzy SVM concept detectors with sample weights and 78.6% improved compared to standard SVM concept detectors; and (b) the selection criteria speed up convergence and improve effectiveness compared to random negative sampling even for a small number of available clicks (up to 134% after 100 clicks).

@article{Sarafis2016Online,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Online training of concept detectors for image retrieval using streaming clickthrough data},
journal={Engineering Applications of Artificial Intelligence},
volume={51},
pages={150-162},
year={2016},
month={01},
date={2016-01-29},
url={http://www.sciencedirect.com/science/article/pii/S095219761600021X},
doi={https://doi.org/10.1016/j.engappai.2016.01.017},
keywords={Clickthrough data;Online learning;Image retrieval;Label noise;Fuzzy SVM;LASVM},
abstract={Clickthrough data from image search engines provide a massive and continuously generated source of user feedback that can be used to model how the search engine users perceive the visual content. Image clickthrough data have been successfully used to build concept detectors without any manual annotation effort, although the generated annotations suffer from labeling errors. Previous research efforts therefore focused on modeling the sample uncertainty in order to improve concept detector effectiveness. In this paper, we study the problem in an online learning setting using streaming clickthrough data where each click is treated separately when it becomes available; the concept detector model is therefore continuously updated without batch retraining. We argue that sample uncertainty can be incorporated in the online learning setting by exploiting the repetitions of incoming clicks at the classifier level, where these act as an implicit importance weighting mechanism. For online concept detector training we use the LASVM algorithm. The inferred weighting approximates the solution of batch trained concept detectors using weighted SVM variants that are known to achieve improved performance and high robustness to noise compared to the standard SVM. Furthermore, we evaluate methods for selecting negative samples using a small number of candidates sampled locally from the incoming stream of clicks. The selection criteria aim at drastically improving the performance and the convergence speed of the online concept detectors. To validate our arguments we conduct experiments for 30 concepts on the Clickture-Lite dataset. The experimental results demonstrate that: (a) the proposed online approach produces effective and noise resilient concept detectors that can take advantage of streaming clickthrough data and achieve performance that is equivalent to Fuzzy SVM concept detectors with sample weights and 78.6% improved compared to standard SVM concept detectors; and (b) the selection criteria speed up convergence and improve effectiveness compared to random negative sampling even for a small number of available clicks (up to 134% after 100 clicks).}
}
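
The online setting above updates the model once per incoming click, so repeated clicks on the same image naturally act as importance weighting. LASVM itself is not reimplemented here; as a hedged stand-in, the sketch below applies Pegasos-style stochastic hinge-loss updates to a linear SVM, one streamed sample at a time.

```python
import numpy as np

class OnlineLinearSVM:
    """Pegasos-style SGD on the hinge loss; one update per streamed sample."""

    def __init__(self, dim, lam=1e-4):
        self.w = np.zeros(dim)
        self.lam = lam
        self.t = 0

    def update(self, x, y):                  # y in {-1, +1}
        self.t += 1
        eta = 1.0 / (self.lam * self.t)      # decaying learning rate
        self.w *= 1.0 - eta * self.lam       # regularization shrinkage
        if y * (self.w @ x) < 1:             # hinge-loss subgradient step
            self.w += eta * y * x

model = OnlineLinearSVM(dim=128)
rng = np.random.default_rng(2)
for _ in range(1000):                        # stream: a clicked image arrives
    x = rng.normal(size=128)                 # its visual features
    model.update(x, +1 if rng.random() < 0.5 else -1)
```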

2015

(J)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Building effective SVM concept detectors from clickthrough data for large-scale image retrieval"
International Journal of Multimedia Information Retrieval, 4, (2), pp. 129-142, 2015 Jun
[Abstract][BibTex][pdf]

Clickthrough data is a source of information that can be used for automatically building concept detectors for image retrieval. Previous studies, however, have shown that in many cases the resulting training sets suffer from severe label noise that has a significant impact on the SVM concept detector performance. This paper evaluates and proposes a set of strategies for automatically building effective concept detectors from clickthrough data. These strategies focus on: (1) automatic training set generation; (2) assignment of label confidence weights to the training samples and (3) using these weights at the classifier level to improve concept detector effectiveness. For training set selection and in order to assign weights to individual training samples three Information Retrieval (IR) models are examined: vector space models, BM25 and language models. Three SVM variants that take into account importance at the classifier level are evaluated and compared to the standard SVM: the Fuzzy SVM, the Power SVM, and the Bilateral-weighted Fuzzy SVM. Experiments conducted on the MM Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) for 40 concepts demonstrate that (1) on average, all weighted SVM variants are more effective than the standard SVM; (2) the vector space model produces the best training sets and best weights; (3) the Bilateral-weighted Fuzzy SVM produces the best results but is very sensitive to weight assignment and (4) the Fuzzy SVM is the most robust training approach for varying levels of label noise.

@article{Sarafis2015Building,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Building effective SVM concept detectors from clickthrough data for large-scale image retrieval},
journal={International Journal of Multimedia Information Retrieval},
volume={4},
number={2},
pages={129-142},
year={2015},
month={06},
date={2015-06-01},
url={http://link.springer.com/article/10.1007/s13735-015-0080-5},
doi={https://doi.org/10.1007/s13735-015-0080-5},
abstract={Clickthrough data is a source of information that can be used for automatically building concept detectors for image retrieval. Previous studies, however, have shown that in many cases the resulting training sets suffer from severe label noise that has a significant impact in the SVM concept detector performance. This paper evaluates and proposes a set of strategies for automatically building effective concept detectors from clickthrough data. These strategies focus on: (1) automatic training set generation; (2) assignment of label confidence weights to the training samples and (3) using these weights at the classifier level to improve concept detector effectiveness. For training set selection and in order to assign weights to individual training samples three Information Retrieval (IR) models are examined: vector space models, BM25 and language models. Three SVM variants that take into account importance at the classifier level are evaluated and compared to the standard SVM: the Fuzzy SVM, the Power SVM, and the Bilateral-weighted Fuzzy SVM. Experiments conducted on the MM Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) for 40 concepts demonstrate that (1) on average, all weighted SVM variants are more effective than the standard SVM; (2) the vector space model produces the best training sets and best weights; (3) the Bilateral-weighted Fuzzy SVM produces the best results but is very sensitive to weight assignment and (4) the Fuzzy SVM is the most robust training approach for varying levels of label noise.}
}
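
Of the three IR models examined above, BM25 is the easiest to sketch: it scores a concept query against the click-log text associated with an image, and the score can then serve as a label-confidence weight. The snippet below uses the customary k1/b defaults; the document statistics are illustrative placeholders.

```python
import math

def bm25(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
    """BM25 score of a query against one document (a bag of terms)."""
    score = 0.0
    for term in query_terms:
        tf = doc_terms.count(term)
        if tf == 0:
            continue
        idf = math.log(1 + (n_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm
    return score

clicked_queries = ["sunset", "beach", "sunset", "holiday"]   # one image's clicks
print(bm25(["sunset"], clicked_queries, {"sunset": 10}, n_docs=1000, avg_len=5.0))
```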

2014

(J)
Niki Aifanti and Anastasios Delopoulos
"Linear subspaces for facial expression recognition"
Signal Processing: Image Communication, 29, (1), pp. 177-188, 2014 Jan
[Abstract][BibTex][pdf]

This paper presents a method for the recognition of the six basic facial expressions in images or in image sequences using landmark points. The proposed technique relies on the observation that the vectors formed by the landmark point coordinates belong to a different manifold for each of the expressions. In addition, experimental measurements validate the hypothesis that each of these manifolds can be decomposed to a small number of linear subspaces of very low dimension. This yields a parameterization of the manifolds that allows for computing the distance of a feature vector from each subspace and consequently from each one of the six manifolds. Two alternative classifiers are next proposed that use the corresponding distances as input: the first one is based on the minimum distance from the manifolds, while the second one uses SVMs that are trained with the vector of all distances from each subspace. The proposed technique is tested for two scenarios, the subject-independent and the subject-dependent one. Extensive experiments for each scenario have been performed on two publicly available datasets yielding very satisfactory expression recognition accuracy.

@article{Aifanti2014Linear,
author={Niki Aifanti and Anastasios Delopoulos},
title={Linear subspaces for facial expression recognition},
journal={Signal Processing: Image Communication},
volume={29},
number={1},
pages={177-188},
year={2014},
month={01},
date={2014-01-01},
url={http://www.sciencedirect.com/science/article/pii/S0923596513001641},
doi={https://doi.org/10.1016/j.image.2013.10.004},
abstract={This paper presents a method for the recognition of the six basic facial expressions in images or in image sequences using landmark points. The proposed technique relies on the observation that the vectors formed by the landmark point coordinates belong to a different manifold for each of the expressions. In addition, experimental measurements validate the hypothesis that each of these manifolds can be decomposed to a small number of linear subspaces of very low dimension. This yields a parameterization of the manifolds that allows for computing the distance of a feature vector from each subspace and consequently from each one of the six manifolds. Two alternative classifiers are next proposed that use the corresponding distances as input: the first one is based on the minimum distance from the manifolds, while the second one uses SVMs that are trained with the vector of all distances from each subspace. The proposed technique is tested for two scenarios, the subject-independent and the subject-dependent one. Extensive experiments for each scenario have been performed on two publicly available datasets yielding very satisfactory expression recognition accuracy.}
}
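
The central computation described above, the distance of a landmark vector from a low-dimensional linear subspace, is a short exercise in linear algebra. The sketch below estimates one subspace per expression with a truncated SVD and classifies by the nearest subspace; dimensions and data are synthetic placeholders.

```python
import numpy as np

def subspace_distance(x, basis, mean):
    """Distance of x from the affine subspace mean + span(basis).
    basis has orthonormal columns."""
    r = x - mean
    return np.linalg.norm(r - basis @ (basis.T @ r))

rng = np.random.default_rng(3)
# One training matrix of landmark-coordinate vectors per expression.
expressions = {name: rng.normal(size=(100, 40)) for name in ("happy", "sad")}

models = {}
for name, train in expressions.items():
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    models[name] = (vt[:3].T, mean)          # top-3 principal directions

x = rng.normal(size=40)                      # a new landmark vector
print(min(models, key=lambda n: subspace_distance(x, *models[n])))
```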

(J)
A. Chrysopoulos, C. Diou, A.L. Symeonidis and P.A. Mitkas
"Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications"
Engineering Applications of Artificial Intelligence, 35, pp. 299-315, 2014 Sep
[Abstract][BibTex][pdf]

In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.

@article{Chrysopoulos2014Bottom,
author={A. Chrysopoulos and C. Diou and A.L. Symeonidis and P.A. Mitkas},
title={Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications},
journal={Engineering Applications of Artificial Intelligence},
volume={35},
pages={299-315},
year={2014},
month={09},
date={2014-09-01},
url={http://www.sciencedirect.com/science/article/pii/S0952197614001377},
doi={https://doi.org/10.1016/j.engappai.2014.06.015},
abstract={In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}
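
The bottom-up aggregation idea above, summing sampled per-appliance activity loads into an installation-level curve, is sketched below with made-up appliance parameters; the paper's statistical activity models are reduced to a Gaussian start-time draw per appliance.

```python
import numpy as np

rng = np.random.default_rng(4)
HOURS = 24

def appliance_load(start_mean, duration, power_kw):
    """One simulated daily use: a block of constant power at a start hour
    drawn around the preferred time."""
    load = np.zeros(HOURS)
    start = int(np.clip(rng.normal(start_mean, 1.0), 0, HOURS - duration))
    load[start:start + duration] = power_kw
    return load

# Aggregate a small household bottom-up: washer, oven, dishwasher.
total = (appliance_load(start_mean=10, duration=2, power_kw=2.0)
         + appliance_load(start_mean=18, duration=1, power_kw=3.0)
         + appliance_load(start_mean=21, duration=2, power_kw=1.5))
print(total.argmax(), total.max())           # peak hour and peak demand (kW)
```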

(J)
G. Mamalakis, C. Diou, A.L. Symeonidis and L. Georgiadis
"Of daemons and men: A file system approach towards intrusion detection"
Applied Soft Computing, 25, pp. 1-14, 2014 Dec
[Abstract][BibTex][pdf]

We present FI2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file-system-specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10⁻² % and 9.3×10⁻⁴ %. Comparison of FI2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.

@article{Mamalakis2014Daemons,
author={G. Mamalakis and C. Diou and A.L. Symeonidis and L. Georgiadis},
title={Of daemons and men: A file system approach towards intrusion detection},
journal={Applied Soft Computing},
volume={25},
pages={1-14},
year={2014},
month={12},
date={2014-12-01},
url={http://www.sciencedirect.com/science/article/pii/S1568494614004311},
doi={https://doi.org/10.1016/j.asoc.2014.07.026},
abstract={We present FI2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file-system-specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10⁻² % and 9.3×10⁻⁴ %. Comparison of FI2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.}
}
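
The decision engine described above can be miniaturized as follows: a one-class classifier is fitted on features of normal activity and applied to a moving window over the incoming stream, raising an alert when most of the window is flagged. The features here are random placeholders for the BSM-derived features of the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
normal_profile = rng.normal(size=(500, 10))       # normal-usage features
detector = OneClassSVM(kernel="rbf", nu=0.05).fit(normal_profile)

stream = rng.normal(size=(100, 10))               # incoming feature vectors
WINDOW = 10
for i in range(WINDOW, len(stream) + 1):
    votes = detector.predict(stream[i - WINDOW:i])   # +1 normal, -1 anomalous
    if (votes == -1).mean() > 0.5:                   # window mostly abnormal
        print(f"alert at record {i - 1}")
```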

(J)
Christos Papachristou and Anastasios Delopoulos
"A method for the evaluation of projective geometric consistency in weakly calibrated stereo with application to point matching"
Computer Vision and Image Understanding, 119, pp. 81-101, 2014 Feb
[Abstract][BibTex][pdf]

We present a novel method that evaluates the geometric consistency of putative point matches in weakly calibrated settings, i.e. when the epipolar geometry but not the camera calibration is known, using only the point coordinates as information. The main idea behind our approach is the fact that each point correspondence in our data belongs to one of two classes (inlier/outlier). The classification of each point match relies on the histogram of a quantity representing the difference between cross ratios derived from a construction involving 6-tuples of point matches. Neither constraints nor scenario-dependent parameters/thresholds are needed. Even for few candidate point matches, the ensemble of 6-tuples containing each of them turns out to provide statistically reliable histograms that discriminate between inliers and outliers. In fact, in most cases a random sampling among this population is sufficient. Nevertheless, the accuracy of the method is positively correlated to its sampling density, leading to an accuracy versus computational complexity trade-off. Theoretical analysis and experiments are given that show the consistent performance of the proposed classification method when applied in inlier/outlier discrimination. The achieved accuracy is favourably evaluated against established methods that employ geometric information only, i.e. those relying on the Sampson, the algebraic and the symmetric epipolar distances. Finally, we also present an application of our scheme in uncalibrated stereo inside a RANSAC framework and compare it to the same methods as above.

@article{Papachristou2014Method,
author={Christos Papachristou and Anastasios Delopoulos},
title={A method for the evaluation of projective geometric consistency in weakly calibrated stereo with application to point matching},
journal={Computer Vision and Image Understanding},
volume={119},
pages={81-101},
year={2014},
month={02},
date={2014-02-01},
url={http://www.sciencedirect.com/science/article/pii/S107731421300235X},
doi={https://doi.org/10.1016/j.cviu.2013.12.004},
abstract={We present a novel method that evaluates the geometric consistency of putative point matches in weakly calibrated settings, i.e. when the epipolar geometry but not the camera calibration is known, using only the point coordinates as information. The main idea behind our approach is the fact that each point correspondence in our data belongs to one of two classes (inlier/outlier). The classification of each point match relies on the histogram of a quantity representing the difference between cross ratios derived from a construction involving 6-tuples of point matches. Neither constraints nor scenario-dependent parameters/thresholds are needed. Even for few candidate point matches, the ensemble of 6-tuples containing each of them turns out to provide statistically reliable histograms that discriminate between inliers and outliers. In fact, in most cases a random sampling among this population is sufficient. Nevertheless, the accuracy of the method is positively correlated to its sampling density, leading to an accuracy versus computational complexity trade-off. Theoretical analysis and experiments are given that show the consistent performance of the proposed classification method when applied in inlier/outlier discrimination. The achieved accuracy is favourably evaluated against established methods that employ geometric information only, i.e. those relying on the Sampson, the algebraic and the symmetric epipolar distances. Finally, we also present an application of our scheme in uncalibrated stereo inside a RANSAC framework and compare it to the same methods as above.}
}
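
The projective invariant underpinning the construction above is the cross ratio, which is preserved by any projective transformation of a line. A minimal computation for four collinear points is shown below; the paper's 6-tuple construction and histogram-based inlier/outlier test are not reproduced.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC * BD) / (BC * AD) of four collinear 2-D points."""
    dist = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))

pts = [(0, 0), (1, 0), (3, 0), (7, 0)]
print(cross_ratio(*pts))    # ~1.2857; unchanged by projective maps of the line
```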

2013

(J)
Nikolaos Dimitriou and Anastasios Delopoulos
"Motion-based segmentation of objects using overlapping temporal windows"
Image and Vision Computing, 31, (9), pp. 593-602, 2013 Sep
[Abstract][BibTex][pdf]

Motion segmentation refers to the problem of separating the objects in a video sequence according to their motion. It is a fundamental problem of computer vision, since various systems focusing on the analysis of dynamic scenes include motion segmentation algorithms. In this paper we present a novel approach, where a video shot is temporally divided into successive and overlapping windows and motion segmentation is performed on each window separately. This attribute renders the algorithm suitable even for long video sequences. In the last stage of the algorithm the segmentation results for every window are aggregated into a final segmentation. The presented algorithm can effectively handle asynchronous trajectories in each window, even when they have no temporal intersection. The evaluation of the proposed algorithm on the Berkeley motion segmentation benchmark demonstrates its scalability and accuracy compared to the state of the art.

@article{Dimitriou2013Motion,
author={Nikolaos Dimitriou and Anastasios Delopoulos},
title={Motion-based segmentation of objects using overlapping temporal windows},
journal={Image and Vision Computing},
volume={31},
number={9},
pages={593-602},
year={2013},
month={09},
date={2013-09-01},
url={http://www.sciencedirect.com/science/article/pii/S0262885613000929},
doi={https://doi.org/10.1016/j.imavis.2013.06.005},
abstract={Motion segmentation refers to the problem of separating the objects in a video sequence according to their motion. It is a fundamental problem of computer vision, since various systems focusing on the analysis of dynamic scenes include motion segmentation algorithms. In this paper we present a novel approach, where a video shot is temporally divided in successive and overlapping windows and motion segmentation is performed on each window respectively. This attribute renders the algorithm suitable even for long video sequences. In the last stage of the algorithm the segmentation results for every window are aggregated into a final segmentation. The presented algorithm can handle effectively asynchronous trajectories on each window even when they have no temporal intersection. The evaluation of the proposed algorithm on the Berkeley motion segmentation benchmark demonstrates its scalability and accuracy compared to the state of the art.}
}
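
A tiny version of the aggregation stage described above: labels computed independently in two overlapping windows are reconciled by majority voting on the trajectories the windows share. The per-window motion segmentation itself is abstracted into precomputed label arrays, and the overlap bookkeeping is deliberately simplified.

```python
import numpy as np

def merge_windows(labels_a, labels_b, overlap):
    """Relabel window B so that it agrees with window A on their overlap
    (A's last `overlap` trajectories are B's first `overlap` ones)."""
    mapping = {}
    for lb in np.unique(labels_b[:overlap]):
        votes = labels_a[-overlap:][labels_b[:overlap] == lb]
        mapping[lb] = np.bincount(votes).argmax()   # majority vote
    return np.array([mapping[lb] for lb in labels_b])

a = np.array([0, 0, 1, 1, 0, 1])        # segmentation of window 1
b = np.array([0, 1, 0, 1, 1, 0])        # window 2, labels arbitrarily permuted
print(merge_windows(a, b, overlap=3))   # -> [1 0 1 0 0 1]
```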

(J)
Christos Maramis, Manolis Falelakis, Irini Lekka, Christos Diou, Pericles Mitkas and Anastasios Delopoulos
"Applying semantic technologies in cervical cancer research"
Data & Knowledge Engineering, 86, pp. 160-178, 2013 Jul
[Abstract][BibTex][pdf]

In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case–control association studies, which is the ultimate goal of the system.

@article{Maramis2013Applying,
author={Christos Maramis and Manolis Falelakis and Irini Lekka and Christos Diou and Pericles Mitkas and Anastasios Delopoulos},
title={Applying semantic technologies in cervical cancer research},
journal={Data & Knowledge Engineering},
volume={86},
pages={160-178},
year={2013},
month={07},
date={2013-07-01},
url={http://www.sciencedirect.com/science/article/pii/S0169023X13000220},
doi={https://doi.org/10.1016/j.datak.2013.02.003},
abstract={In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case–control association studies, which is the ultimate goal of the system.}
}

2011

(J)
Christos Maramis, Anastasios Delopoulos and Alexandros Lambropoulos
"A Computerized Methodology for Improved Virus Typing by PCR-RFLP Gel Electrophoresis"
IEEE Transactions on Biomedical Engineering, 58, (8), pp. 2339-2351, 2011 Aug
[Abstract][BibTex][pdf]

The analysis of digitized images from polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) gel electrophoresis examinations is a popular method for virus typing, i.e., for identifying the virus type(s) that have infected an investigated biological sample. However, being mostly manual, the conventional virus typing protocol remains laborious, time consuming, and error prone. In order to overcome these shortcomings, we propose a computerized methodology for improving virus typing via PCR-RFLP gel electrophoresis. A novel realistic observation model of the viral DNA motion on the gel matrix is employed to assist in exploiting additional virus-related information in comparison to the conventional approaches. The extracted rich information is fed to a novel typing algorithm, resulting in faster and more accurate decisions. The proposed methodology is evaluated for the case of the human papillomavirus typing on a dataset of 80 real and 1500 simulated samples, producing very satisfactory results.

@article{Maramis2011Computerized,
author={Christos Maramis and Anastasios Delopoulos and Alexandros Lambropoulos},
title={A Computerized Methodology for Improved Virus Typing by PCR-RFLP Gel Electrophoresis},
journal={IEEE Transactions on Biomedical Engineering},
volume={58},
number={8},
pages={2339-2351},
year={2011},
month={08},
date={2011-08-01},
url={http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5765663},
doi={https://doi.org/10.1109/TBME.2011.2153202},
abstract={The analysis of digitized images from polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) gel electrophoresis examinations is a popular method for virus typing, i.e., for identifying the virus type(s) that have infected an investigated biological sample. However, being mostly manual, the conventional virus typing protocol remains laborious, time consuming, and error prone. In order to overcome these shortcomings, we propose a computerized methodology for improving virus typing via PCR-RFLP gel electrophoresis. A novel realistic observation model of the viral DNA motion on the gel matrix is employed to assist in exploiting additional virus-related information in comparison to the conventional approaches. The extracted rich information is fed to a novel typing algorithm, resulting in faster and more accurate decisions. The proposed methodology is evaluated for the case of the human papillomavirus typing on a dataset of 80 real and 1500 simulated samples, producing very satisfactory results.}
}
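
A common first-order model in gel electrophoresis, and the flavor of observation model the methodology above builds on, is that migration distance falls roughly linearly with the logarithm of fragment length. The sketch below scores candidate virus types by how well their restriction-fragment lengths explain the observed band positions; the calibration constants and fragment tables are illustrative assumptions, not values from the paper.

```python
import numpy as np

def migration(fragment_len, a=120.0, c=18.0):
    """Predicted band position (arbitrary units) for a fragment length."""
    return a - c * np.log10(fragment_len)

# Candidate types described by their restriction-fragment lengths (bp).
types = {"type-16": [310, 190, 120], "type-18": [420, 200]}

# Simulated noisy observation of a type-16 lane.
observed = np.array([migration(l) for l in types["type-16"]]) + 0.4

def score(fragments, observed):
    """Sum of squared gaps between each predicted band and its nearest
    observed band (lower is better)."""
    return sum(((observed - migration(l)) ** 2).min() for l in fragments)

print(min(types, key=lambda t: score(types[t], observed)))   # 'type-16'
```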

(J)
Christos Maramis and Anastasios Delopoulos
"A Novel Algorithm for Restricting the Complexity of Virus Typing via PCR-RFLP Gel Electrophoresis"
Biomedical Engineering Letters, 1, (4), pp. 239-246, 2011 Nov
[Abstract][BibTex][pdf]

PCR-RFLP gel electrophoresis is a popular method for virus typing (i.e., for identifying the types of a virus that have infected a biological sample), which has been automated recently owing to a computerized typing methodology. However, even with the help of this methodology, the PCR-RFLP method suffers from low throughput, when compared to other typing methods. In this paper, we tackle this issue by introducing a novel algorithm for conducting the most computationally demanding phase of the aforementioned typing methodology (the testing phase).

@article{Maramis2011Novel,
author={Christos Maramis and Anastasios Delopoulos},
title={A Novel Algorithm for Restricting the Complexity of Virus Typing via PCR-RFLP Gel Electrophoresis},
journal={Biomedical Engineering Letters},
volume={1},
number={4},
pages={239-246},
year={2011},
month={11},
date={2011-11-20},
url={http://dx.doi.org/10.1007/s13534-011-0038-3},
doi={https://doi.org/10.1007/s13534-011-0038-3},
abstract={PCR-RFLP gel electrophoresis is a popular method for virus typing (i.e., for identifying the types of a virus that have infected a biological sample), which has been automated recently owing to a computerized typing methodology. However, even with the help of this methodology, the PCR-RFLP method suffers from low throughput, when compared to other typing methods. In this paper, we tackle this issue by introducing a novel algorithm for conducting the most computationally demanding phase of the aforementioned typing methodology (the testing phase).}
}

2010

(J)
Christos Diou, George Stephanopoulos, Panagiotis Panagiotopoulos, Christos Papachristou, Nikos Dimitriou and Anastasios Delopoulos
"Large-Scale Concept Detection in Multimedia Data Using Small Training Sets and Cross-Domain Concept Fusion"
IEEE Transactions on Circuits and Systems for Video Technology, 20, (12), pp. 1808 - 1821, 2010 Oct
[Abstract][BibTex][pdf]

This paper presents the concept detector module developed for the VITALAS multimedia retrieval system. It outlines its architecture and major implementation aspects, including a set of procedures and tools that were used for the development of detectors for more than 500 concepts. The focus is on aspects that increase the system's scalability in terms of the number of concepts: collaborative concept definition and disambiguation, selection of small but sufficient training sets and efficient manual annotation. The proposed architecture uses cross-domain concept fusion to improve effectiveness and reduce the number of samples required for concept detector training. Two criteria are proposed for selecting the best predictors to use for fusion and their effectiveness is experimentally evaluated for 221 concepts on the TRECVID-2005 development set and 132 concepts on a set of images provided by the Belga news agency. In these experiments, cross-domain concept fusion performed better than early fusion for most concepts. Experiments with variable training set sizes also indicate that cross-domain concept fusion is more effective than early fusion when the training set size is small.

@article{Diou2011Large,
author={Christos Diou and George Stephanopoulos and Panagiotis Panagiotopoulos and Christos Papachristou and Nikos Dimitriou and Anastasios Delopoulos},
title={Large-Scale Concept Detection in Multimedia Data Using Small Training Sets and Cross-Domain Concept Fusion},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
volume={20},
number={12},
pages={1808 - 1821},
year={2010},
month={10},
date={2010-10-18},
url={http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5604666},
doi={https://doi.org/10.1109/TCSVT.2010.2087814},
abstract={This paper presents the concept detector module developed for the VITALAS multimedia retrieval system. It outlines its architecture and major implementation aspects, including a set of procedures and tools that were used for the development of detectors for more than 500 concepts. The focus is on aspects that increase the system's scalability in terms of the number of concepts: collaborative concept definition and disambiguation, selection of small but sufficient training sets and efficient manual annotation. The proposed architecture uses cross-domain concept fusion to improve effectiveness and reduce the number of samples required for concept detector training. Two criteria are proposed for selecting the best predictors to use for fusion and their effectiveness is experimentally evaluated for 221 concepts on the TRECVID-2005 development set and 132 concepts on a set of images provided by the Belga news agency. In these experiments, cross-domain concept fusion performed better than early fusion for most concepts. Experiments with variable training set sizes also indicate that cross-domain concept fusion is more effective than early fusion when the training set size is small.}
}
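
Cross-domain concept fusion, as described above, can be sketched by appending the scores of detectors trained on a source domain to the target-domain feature vectors before training the final classifier. Data below are synthetic, and the paper's predictor-selection criteria are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X_target = rng.normal(size=(150, 30))            # target-domain features
y_target = rng.integers(0, 2, size=150)

# Stand-ins for detectors trained on another domain (e.g., broadcast video).
source_detectors = [
    SVC().fit(rng.normal(size=(80, 30)), rng.integers(0, 2, size=80))
    for _ in range(5)
]

# Fusion: stack cross-domain detector scores onto the visual features.
cross_scores = np.column_stack(
    [d.decision_function(X_target) for d in source_detectors]
)
X_fused = np.hstack([X_target, cross_scores])
fused_detector = SVC().fit(X_fused, y_target)
```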

(J)
Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos
"Reliability and effectiveness of clickthrough data for automatic image annotation"
Multimedia Tools and Applications, 55, (1), pp. 27-52, 2010 Aug
[Abstract][BibTex][pdf]

Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive despite their inherent noise; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. An extensive presentation of the experimental results and the accompanying data can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.

@article{Tsikrika2011Reliability,
author={Theodora Tsikrika and Christos Diou and Arjen P. de Vries and Anastasios Delopoulos},
title={Reliability and effectiveness of clickthrough data for automatic image annotation},
journal={Multimedia Tools and Applications},
volume={55},
number={1},
pages={27-52},
year={2010},
month={08},
date={2010-08-17},
url={http://dx.doi.org/10.1007/s11042-010-0584-1},
doi={https://doi.org/10.1007/s11042-010-0584-1},
abstract={Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive despite their inherent noise; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. An extensive presentation of the experimental results and the accompanying data can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.}
}

2009

(J)
Theodoros Agorastos, Vassilis Koutkias, Manolis Falelakis, Irini Lekka, Themistoklis Mikos, Anastasios Delopoulos, Pericles Mitkas, Antonios Tantsis, Steven Weyers, Pascal Coorevits, Andreas Kaufmann, Roberto Kurzeja and Nicos Maglaveras
"Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach"
Cancer Informatics, 8, pp. 31-31, 2009 Jan
[Abstract][BibTex][pdf]

The current work addresses the unification of Electronic Health Records related to cervical cancer into a single medical knowledge source, in the context of the EU-funded ASSIST research project. The project aims to facilitate the research for cervical precancer and cancer through a system that virtually unifies multiple patient record repositories, physically located in different medical centers/hospitals, thus increasing flexibility by allowing the formation of study groups "on demand" and by recycling patient records in new studies. To this end, ASSIST uses semantic technologies to translate all medical entities (such as patient examination results, history, habits, genetic profile) and represent them in a common form, encoded in the ASSIST Cervical Cancer Ontology. The current paper presents the knowledge elicitation approach followed, towards the definition and representation of the disease's medical concepts and rules that constitute the basis for the ASSIST Cervical Cancer Ontology. The proposed approach constitutes a paradigm for semantic integration of heterogeneous clinical data that may be applicable to other biomedical application domains.

@article{Agorastos2009Semantic,
author={Theodoros Agorastos and Vassilis Koutkias and Manolis Falelakis and Irini Lekka and Themistoklis Mikos and Anastasios Delopoulos and Pericles Mitkas and Antonios Tantsis and Steven Weyers and Pascal Coorevits and Andreas Kaufmann and Roberto Kurzeja and Nicos Maglaveras},
title={Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach},
journal={Cancer Informatics},
volume={8},
pages={31-31},
year={2009},
month={01},
date={2009-01-01},
url={http://search.proquest.com/docview/1038326414?accountid=8359},
abstract={The current work addresses the unification of Electronic Health Records related to cervical cancer into a single medical knowledge source, in the context of the EU-funded ASSIST research project. The project aims to facilitate the research for cervical precancer and cancer through a system that virtually unifies multiple patient record repositories, physically located in different medical centers/hospitals, thus increasing flexibility by allowing the formation of study groups "on demand" and by recycling patient records in new studies. To this end, ASSIST uses semantic technologies to translate all medical entities (such as patient examination results, history, habits, genetic profile) and represent them in a common form, encoded in the ASSIST Cervical Cancer Ontology. The current paper presents the knowledge elicitation approach followed, towards the definition and representation of the disease's medical concepts and rules that constitute the basis for the ASSIST Cervical Cancer Ontology. The proposed approach constitutes a paradigm for semantic integration of heterogeneous clinical data that may be applicable to other biomedical application domains.}
}

2008

(J)
Manolis Falelakis, Christos Diou and Anastasios Delopoulos
"Complexity control in semantic identification"
International Journal of Intelligent Systems Technologies and Applications, 1, (3/4), pp. 247-262, 2008 Jan
[Abstract][BibTex][pdf]

This work introduces an efficient scheme for identifying semantic entities within multimedia data sets, providing mechanisms for modelling the trade-off between the accuracy of the result and the entailed computational cost. Semantic entities are described through formal definitions based on lower-level semantic and/or syntactic features. Based on appropriate metrics, the paper presents a methodology for selecting optimal subsets of syntactic features to extract, so that satisfactory results are obtained, while complexity remains below some required limit.
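
To make the accuracy-complexity trade-off concrete, here is a minimal sketch under assumptions of our own: each candidate syntactic feature carries an illustrative accuracy gain and an integer extraction cost, and a 0/1-knapsack dynamic program picks the subset with the largest total gain whose total cost stays within a complexity budget. The additive gain model and the numbers are stand-ins, not the paper's metrics.

    def select_features(gain, cost, budget):
        """0/1-knapsack DP: maximize total gain subject to total cost <= budget."""
        n = len(gain)
        best = [[0.0] * (budget + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for b in range(budget + 1):
                best[i][b] = best[i - 1][b]
                if cost[i - 1] <= b:
                    cand = best[i - 1][b - cost[i - 1]] + gain[i - 1]
                    best[i][b] = max(best[i][b], cand)
        chosen, b = [], budget          # backtrack to recover the subset
        for i in range(n, 0, -1):
            if best[i][b] != best[i - 1][b]:
                chosen.append(i - 1)
                b -= cost[i - 1]
        return chosen[::-1], best[n][budget]

    # Three candidate features as (gain, cost) pairs, budget of 5 cost units:
    print(select_features([0.4, 0.3, 0.2], [5, 3, 2], budget=5))  # ([1, 2], 0.5)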

@article{Falelakis2006Complexity,
author={Manolis Falelakis and Christos Diou and Anastasios Delopoulos},
title={Complexity control in semantic identification},
journal={International Journal of Intelligent Systems Technologies and Applications},
volume={1},
number={3/4},
pages={247-262},
year={2008},
month={01},
date={2008-01-04},
url={http://dx.doi.org/10.1504/IJISTA.2006.009907},
doi={https://doi.org/10.1504/IJISTA.2006.009907},
abstract={This work introduces an efficient scheme for identifying semantic entities within multimedia data sets, providing mechanisms for modelling the trade-off between the accuracy of the result and the entailed computational cost. Semantic entities are described through formal definitions based on lower-level semantic and/or syntactic features. Based on appropriate metrics, the paper presents a methodology for selecting optimal subsets of syntactic features to extract, so that satisfactory results are obtained, while complexity remains below some required limit.}
}

2007

(J)
Anastasios Delopoulos, Levon Sukissian and Stefanos Kollias
"An efficient multiresolution texture classification scheme using neural networks"
International Journal of Computer Mathematics, 67, (1-2), pp. 155-168, 2007 Mar
[Abstract][BibTex][pdf]

An efficient multiresolution texture classification method is proposed in this paper, based on 2-D linear prediction, multiresolution decomposition and artificial neural networks. A multiresolution spectral analysis of textured images is first developed, which permits 2-D AR texture modelling to be performed in multiple resolutions. Recursive estimation algorithms combined with the Itakura distance measure provide sets of AR model parameters representing different textures at various resolutions. Appropriate neural network banks are constructed and trained, which are then able to effectively classify textures irrespective of their resolution level. Results are presented using real textured images which illustrate the good performance of the proposed approach.
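
A small sketch of the Itakura-distance ingredient, under simplifying assumptions of our own (Yule-Walker AR fitting, biased autocorrelation estimates, a synthetic AR(1) signal): the distance scores how much worse a candidate AR model predicts a reference signal than that signal's own fitted model. The function names and the test setup are illustrative, not taken from the paper.

    import numpy as np
    from scipy.linalg import toeplitz

    def ar_coeffs(x, order):
        """Yule-Walker estimate of the AR polynomial [1, a1, ..., ap]."""
        r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
        a = np.linalg.solve(toeplitz(r[:order]), -r[1:order + 1])
        return np.concatenate(([1.0], a))

    def itakura_distance(x_ref, a_test, order):
        """Log prediction-error ratio of a candidate AR model against the
        reference signal's own model (non-negative; zero at a perfect match)."""
        r = np.correlate(x_ref, x_ref, mode="full")[len(x_ref) - 1:] / len(x_ref)
        R = toeplitz(r[:order + 1])
        a_ref = ar_coeffs(x_ref, order)
        return np.log((a_test @ R @ a_test) / (a_ref @ R @ a_ref))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)
    for k in range(1, len(x)):           # synthetic AR(1) "texture" signal
        x[k] += 0.8 * x[k - 1]
    print(itakura_distance(x, np.array([1.0, -0.2]), order=1))  # > 0: mismatch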

@article{Delopoulos2007Efficient,
author={Anastasios Delopoulos and Levon Sukissian and Stefanos Kollias},
title={An efficient multiresolution texture classification scheme using neural networks},
journal={International Journal of Computer Mathematics},
volume={67},
number={1-2},
pages={155-168},
year={2007},
month={03},
date={2007-03-20},
url={http://mug.ee.auth.gr/wp-content/uploads/00207169808804657.pdf},
doi={https://doi.org/10.1080/00207169808804657},
abstract={An efficient multiresolution texture classification method is proposed in this paper, based on 2-D linear prediction, multiresolution decomposition and artificial neural networks. A multiresolution spectral analysis of textured images is first developed, which permits 2-D AR texture modelling to be performed in multiple resolutions. Recursive estimation algorithms combined with the Itakura distance measure provide sets of AR model parameters representing different textures at various resolutions. Appropriate neural network banks are constructed and trained, which are then able to effectively classify textures irrespective of their resolution level. Results are presented using real textured images which illustrate the good performance of the proposed approach.}
}

2006

(J)
Manolis Falelakis, Christos Diou and Anastasios Delopoulos
"Semantic identification: balancing between complexity and validity"
EURASIP Journal on Applied Signal Processing, pp. 183-183, 2006 Jan
[Abstract][BibTex][pdf]

An efficient scheme for identifying semantic entities within data sets such as multimedia documents, scenes, signals, and so forth, is proposed in this work. Expression of semantic entities in terms of syntactic properties is modelled with appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, attained validity and certainty, as well as required complexity, are formally defined as metrics of identification efficiency. The main contribution of the paper lies in organizing the identification and search procedure in a way that maximizes its validity for bounded complexity budgets and, conversely, minimizes computational complexity for a given required validity threshold. The associated optimization problem is solved by using dynamic programming. Finally, a set of experiments provides insight into the introduced theoretical framework.
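
In rough notation of our own (not the paper's symbols), the two dual optimization problems read

    \[
    \max_{S \subseteq F} V(S) \;\; \text{s.t.} \;\; C(S) \le C_{\max}
    \qquad \text{and} \qquad
    \min_{S \subseteq F} C(S) \;\; \text{s.t.} \;\; V(S) \ge V_{\min},
    \]

where F is the set of available syntactic properties, V the attained validity and C the entailed computational complexity; dynamic programming over partial property sets solves both.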

@article{Falelakis2006Semantic,
author={Manolis Falelakis and Christos Diou and Anastasios Delopoulos},
title={Semantic identification: balancing between complexity and validity},
journal={EURASIP Journal on Applied Signal Processing},
pages={183-183},
year={2006},
month={01},
date={2006-01-01},
url={http://dx.doi.org/10.1155/ASP/2006/41716},
doi={https://doi.org/10.1155/ASP/2006/41716},
abstract={An efficient scheme for identifying semantic entities within data sets such as multimedia documents, scenes, signals, and so forth, is proposed in this work. Expression of semantic entities in terms of syntactic properties is modelled with appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, attained validity and certainty, as well as required complexity, are formally defined as metrics of identification efficiency. The main contribution of the paper lies in organizing the identification and search procedure in a way that maximizes its validity for bounded complexity budgets and, conversely, minimizes computational complexity for a given required validity threshold. The associated optimization problem is solved by using dynamic programming. Finally, a set of experiments provides insight into the introduced theoretical framework.}
}

(J)
M. Wallace, T. Athanasiadis, Y. Avrithis, A. N. Delopoulos and S. Kollias
"Integrating multimedia archives: the architecture and the content layer"
IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 36, (1), pp. 34-52, 2006 Jan
[Abstract][BibTex][pdf]

In the last few years, numerous multimedia archives have made extensive use of digitized storage and annotation technologies. Still, the development of single points of access, providing common and uniform access to their data, despite the efforts and accomplishments of standardization organizations, has remained an open issue as it involves the integration of various large-scale heterogeneous and heterolingual systems. This paper describes a mediator system that achieves architectural integration through an extended three-tier architecture and content integration through semantic modeling. The described system has successfully integrated five multimedia archives, quite different in nature and content from each other, while also providing easy and scalable inclusion of more archives in the future.

@article{Wallace2006Integrating,
author={M. Wallace and T. Athanasiadis and Y. Avrithis and A. N. Delopoulos and S. Kollias},
title={Integrating multimedia archives: the architecture and the content layer},
journal={IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans},
volume={36},
number={1},
pages={34-52},
year={2006},
month={01},
date={2006-01-01},
url={http://dx.doi.org/10.1109/TSMCA.2005.859184},
doi={https://doi.org/10.1109/TSMCA.2005.859184},
abstract={In the last few years, numerous multimedia archives have made extensive use of digitized storage and annotation technologies. Still, the development of single points of access, providing common and uniform access to their data, despite the efforts and accomplishments of standardization organizations, has remained an open issue as it involves the integration of various large-scale heterogeneous and heterolingual systems. This paper describes a mediator system that achieves architectural integration through an extended three-tier architecture and content integration through semantic modeling. The described system has successfully integrated five multimedia archives, quite different in nature and content from each other, while also providing easy and scalable inclusion of more archives in the future.}
}

2003

(J)
C. Diou and Jacek Karwatka
"Some methods of identification high clutter regions in radar tracking system"
Postepy Radiotechniki, 48, (147), pp. 3-15, 2003 Jan
[Abstract][BibTex]

@article{Diou2003Some,
author={C. Diou and Jacek Karwatka},
title={Some methods of identification high clutter regions in radar tracking system},
journal={Postepy Radiotechniki},
volume={48},
number={147},
pages={3-15},
year={2003},
month={01},
date={2003-01-01}
}

2001

(J)
Anastasios Delopoulos, Stephanos Kollias, Yiannis Avrithis, W. Haas and K. Majcen
"Unified Intelligent Access to Heterogeneous Audiovisual Content"
Content-Based Multimedia Indexing, 2001 Sep
[Abstract][BibTex][pdf]

Content-based audiovisual data retrieval utilizing new emerging related standards such as MPEG-7 will yield ineffective results, unless major focus is given to the semantic information level. Mapping of low-level, sub-symbolic descriptors of a/v archives to high-level symbolic ones is in general difficult, even impossible with the current state of technology. It can, however, be tackled when dealing with specific application domains. It seems that the extraction of semantic information from a/v and text-related data is tractable, taking into account the nature of useful queries that users may issue and the context determined by the user profile. The European IST project FAETHON is developing a novel platform that intends to exploit the aforementioned ideas in order to offer user-friendly, highly informative access to distributed audiovisual archives.

@article{Delopoulos2001Unified,
author={Anastasios Delopoulos and Stephanos Kollias and Yiannis Avrithis and W. Haas and K. Majcen},
title={Unified Intelligent Access to Heterogeneous Audiovisual Content},
journal={Content-Based Multimedia Indexing},
year={2001},
month={09},
date={2001-09-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/img68.pdf},
abstract={Content-based audiovisual data retrieval utilizing new emerging related standards such as MPEG-7 will yield ineffective results, unless major focus is given to the semantic information level. Mapping of low-level, sub-symbolic descriptors of a/v archives to high-level symbolic ones is in general difficult, even impossible with the current state of technology. It can, however, be tackled when dealing with specific application domains. It seems that the extraction of semantic information from a/v and text-related data is tractable, taking into account the nature of useful queries that users may issue and the context determined by the user profile. The European IST project FAETHON is developing a novel platform that intends to exploit the aforementioned ideas in order to offer user-friendly, highly informative access to distributed audiovisual archives.}
}

(J)
Christos Papachristou and Fotini-Niovi Pavlidou
"Collision-Free Operation in Ad Hoc Carrier Sense Multiple Access Wireless Networks"
IEEE Communications Letters, 6, (8), pp. 352-354, 2001 Aug
[Abstract][BibTex][pdf]

Index terms: IEEE standards; carrier sense multiple access; packet radio networks; telecommunication standards; CSMA/CA algorithm; IEEE 802.11 standard packets; RTS/CTS packets; ad hoc carrier sense multiple access wireless networks; busy energy bursts; collision-free operation; energy bursts packet delays; system loads; system performance; ad hoc networks; delay; intelligent networks; multiaccess communication; road accidents; sections; telecommunication traffic; wireless LAN; wireless networks.

@article{Papachristou,
author={Christos Papachristou and Fotini-Niovi Pavlidou},
title={Collision-Free Operation in Ad Hoc Carrier Sense Multiple Access Wireless Networks},
journal={IEEE Communications Letters},
volume={6},
number={8},
pages={352-354},
year={2001},
month={08},
date={2001-08-01},
url={http://dx.doi.org/10.1109/LCOMM.2002.802036},
doi={https://doi.org/10.1109/LCOMM.2002.802036},
keywords={IEEE standards; carrier sense multiple access; packet radio networks; telecommunication standards; CSMA/CA algorithm; IEEE 802.11 standard packets; RTS/CTS packets; ad hoc carrier sense multiple access wireless networks; busy energy bursts; collision-free operation; energy bursts packet delays; system loads; system performance; ad hoc networks; delay; intelligent networks; multiaccess communication; road accidents; sections; telecommunication traffic; wireless LAN; wireless networks}
}

(J)
Yiannis S. Xirouhakis, Athanasios I. Drosopoulos and Anastasios N. Delopoulos
"Efficient optical camera tracking in virtual sets"
IEEE Transactions on Image Processing, 10, (4), pp. 609-622, 2001 Apr
[Abstract][BibTex][pdf]

Optical tracking systems have become particularly popular in virtual studio applications, tending to substitute electromechanical ones. However, optical systems are reported to be inferior in terms of accuracy in camera motion estimation. Moreover, marker-based approaches often cause problems in image/video compositing and impose undesirable constraints on camera movement. The present work introduces a novel methodology for the construction of a two-tone blue screen, which allows the localization of the camera in three-dimensional (3-D) space on the basis of the captured sequence. At the same time, a novel algorithm is presented for the extraction of the camera's 3-D motion parameters based on 3-D-to-two-dimensional (2-D) line correspondences. Simulated experiments have been included to illustrate the performance of the proposed system.

@article{Xirouhakis2001Efficient,
author={Yiannis S. Xirouhakis and Athanasios I. Drosopoulos and Anastasios N. Delopoulos},
title={Efficient optical camera tracking in virtual sets},
journal={IEEE Transactions on Image Processing},
volume={10},
number={4},
pages={609-622},
year={2001},
month={04},
date={2001-04-01},
url={http://dx.doi.org/10.1109/83.913595},
doi={https://doi.org/10.1109/83.913595},
abstract={Optical tracking systems have become particularly popular in virtual studio applications, tending to substitute electromechanical ones. However, optical systems are reported to be inferior in terms of accuracy in camera motion estimation. Moreover, marker-based approaches often cause problems in image/video compositing and impose undesirable constraints on camera movement. The present work introduces a novel methodology for the construction of a two-tone blue screen, which allows the localization of the camera in three-dimensional (3-D) space on the basis of the captured sequence. At the same time, a novel algorithm is presented for the extraction of the camera's 3-D motion parameters based on 3-D-to-two-dimensional (2-D) line correspondences. Simulated experiments have been included to illustrate the performance of the proposed system.}
}

2000

(J)
Yiannis Xirouhakis and Anastasios Delopoulos
"A Comparative Study on 3D Motion Estimation under Orthography"
Nordic signal processing symposium, 2000 Jun
[Abstract][BibTex][pdf]

In the present work, the algorithm proposed in [8,10] is tested against existing approaches on 3D motion and structure estimation of rigid objects under orthography. The theoretical relation between the proposed approach and the well-known factorization and epipolar methods is discussed. At the same time, comparative simulated experiments are given, illustrating the performance of the three algorithms (the factorization, the epipolar and the proposed one). The proposed algorithm seems to be more generic than the existing approaches, and provides superior estimates of 3D motion in most cases.

@article{Xirouhakis2000Comparative,
author={Yiannis Xirouhakis and Anastasios Delopoulos},
title={A Comparative Study on 3D Motion Estimation under Orthography},
journal={Nordic signal processing symposium},
year={2000},
month={06},
date={2000-06-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/page057_id151.pdf},
abstract={In the present work, the algorithm proposed in [8,10] is tested against existing approaches on 3D motion and structure estimation of rigid objects under orthography. The theoretical relation between the proposed approach and the well-known factorization and epipolar methods is discussed. At the same time, comparative simulated experiments are given, illustrating the performance of the three algorithms (the factorization, the epipolar and the proposed one). The proposed algorithm seems to be more generic than the existing approaches, and provides superior estimates of 3D motion in most cases.}
}

(J)
Yiannis Xirouhakis and Anastasios Delopoulos
"Least Squares Estimation of 3D Shape and Motion of Rigid Objects from their Orthographic Projections"
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, (4), pp. 393-399, 2000 Apr
[Abstract][BibTex][pdf]

The extraction of motion and shape information of three-dimensional objects from their two-dimensional projections is a task that emerges in various applications such as computer vision, biomedical engineering, and video coding and mining, especially after the recent guidelines of the Motion Pictures Expert Group regarding the MPEG-4 and MPEG-7 standards. The present work establishes a novel approach for extracting the motion and shape parameters of a rigid three-dimensional object on the basis of its orthographic projections and the associated motion field. Experimental results have been included to verify the theoretical analysis.
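
For orientation, the orthographic camera model underlying this line of work can be written (in our notation, not the paper's) as

    \[
    \mathbf{x} = \Pi\,(R\,\mathbf{X} + \mathbf{t}),
    \qquad
    \Pi = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},
    \]

so a 3D point X is rotated, translated and simply stripped of its depth coordinate. Each correspondence therefore constrains only the first two rows of R and the in-plane part of t, and a least-squares fit recovers these quantities from the motion field subject to the orthonormality of the rows of R.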

@article{Xirouhakis2000Least,
author={Yiannis Xirouhakis and Anastasios Delopoulos},
title={Least Squares Estimation of 3D Shape and Motion of Rigid Objects from their Orthographic Projections},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume={22},
number={4},
pages={393-399},
year={2000},
month={04},
date={2000-04-01},
url={http://dx.doi.org/10.1109/34.845382},
doi={https://doi.org/10.1109/34.845382},
abstract={The extraction of motion and shape information of three-dimensional objects from their two-dimensional projections is a task that emerges in various applications such as computer vision, biomedical engineering, and video coding and mining, especially after the recent guidelines of the Motion Pictures Expert Group regarding the MPEG-4 and MPEG-7 standards. The present work establishes a novel approach for extracting the motion and shape parameters of a rigid three-dimensional object on the basis of its orthographic projections and the associated motion field. Experimental results have been included to verify the theoretical analysis.}
}

1999

(J)
Sotirios Pavlopoulos and Anastasios Delopoulos
"Designing and implementing the transition to a fully digital hospital"
IEEE Transactions on Information Technology in Biomedicine, 3, (1), pp. 6-19, 1999 Mar
[Abstract][BibTex][pdf]

The increase in the number of examinations performed in modern healthcare institutions, in conjunction with the range of imaging modalities available today, has resulted in a tremendous increase in the number of medical images generated and has made a dedicated system able to acquire, distribute, and store medical image data very attractive. Within the framework of the Hellenic R&D program, we have designed and implemented a picture archiving and communication system for a high-tech cardiosurgery hospital in Greece. The system is able to handle in digital form images produced from ultrasound, X-ray angiography, γ-camera, chest X-rays, as well as electrocardiogram signals. Based on the adoption of an open architecture highly relying on the DICOM standard, the system enables the smooth transition from the existing procedures to a fully digital operation mode and the integration of all existing medical equipment into the new central archiving system.
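
The interoperability argument rests on DICOM's uniform attribute model. The toy snippet below, using the third-party pydicom package (our choice of tool, not the paper's software), builds a minimal in-memory DICOM dataset to show the standardized fields that let one central archive ingest output from any conforming modality.

    from pydicom.dataset import Dataset

    # A tiny in-memory DICOM object: every conforming modality (ultrasound,
    # angiography, gamma camera, ...) exposes these same standard attributes,
    # which is what makes uniform central archiving possible.
    ds = Dataset()
    ds.Modality = "US"            # ultrasound
    ds.StudyDate = "19990301"
    ds.PatientID = "ANON-0001"    # hypothetical anonymized identifier
    print(ds.Modality, ds.StudyDate, ds.PatientID)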

@article{Pavlopoulos1999Designing,
author={Sotirios Pavlopoulos and Anastasios Delopoulos},
title={Designing and implementing the transition to a fully digital hospital},
journal={IEEE Transactions on Information Technology in Biomedicine},
volume={3},
number={1},
pages={6-19},
year={1999},
month={03},
date={1999-03-01},
url={http://dx.doi.org/10.1109/4233.748971},
doi={https://doi.org/10.1109/4233.748971},
abstract={The increase in the number of examinations performed in modern healthcare institutions, in conjunction with the range of imaging modalities available today, has resulted in a tremendous increase in the number of medical images generated and has made a dedicated system able to acquire, distribute, and store medical image data very attractive. Within the framework of the Hellenic R&D program, we have designed and implemented a picture archiving and communication system for a high-tech cardiosurgery hospital in Greece. The system is able to handle in digital form images produced from ultrasound, X-ray angiography, γ-camera, chest X-rays, as well as electrocardiogram signals. Based on the adoption of an open architecture highly relying on the DICOM standard, the system enables the smooth transition from the existing procedures to a fully digital operation mode and the integration of all existing medical equipment into the new central archiving system.}
}

1998

(J)
Stefanos Kollias and Anastasios Delopoulos
"Multiresolution Techniques and their Applications to Image Recognition"
Expert Systems Techniques and Applications, 1998 Jan
[Abstract][BibTex]

@article{Kollias1998Multiresolution,
author={Stefanos Kollias and Anastasios Delopoulos},
title={Multiresolution Techniques and their Applications to Image Recognition},
journal={Expert Systems Techniques and Applications},
year={1998},
month={01},
date={1998-01-01}
}

1996

(J)
Anastasios Delopoulos and Georgios B. Giannakis
"Cumulant Based Identification of Noisy Closed-Loop Systems"
International journal of adaptive control and signal processing, 10, (2-3), pp. 303-317, 1996 Mar
[Abstract][BibTex][pdf]

Conventional parameter estimation approaches fail to identify linear systems operating in closed loop when both input and output measurements are contaminated by additive noise of unknown (cross-)spectral characteristics. However, even in the absence of measurement noise, parameter estimation is involved owing to the additive system noise entering the loop. The present work introduces a novel criterion which is theoretically insensitive to a class of disturbances and yields the same parameter estimates that one obtains using mean squared error (MSE) minimization in the absence of noise. A strongly convergent sample-based approximation of the proposed criterion is introduced for consistent parameter estimation in practice. It is also shown that in the common case of ARMA modelling the resulting parameter estimates coincide with those obtained from a set of linear equations which can be solved using a time-recursive algorithm. Simulation results are presented to verify the performance of the proposed schemes in low-signal-to-noise-ratio environments.

@article{Delopoulos1996Cumulant,
author={Anastasios Delopoulos and Georgios B. Giannakis},
title={Cumulant Based Identification of Noisy Closed-Loop Systems},
journal={International journal of adaptive control and signal processing},
volume={10},
number={2-3},
pages={303-317},
year={1996},
month={03},
date={1996-03-01},
url={http://mug.ee.auth.gr/wp-content/uploads/3-303-AID-ACS352-3.0.pdf},
abstract={Conventional parameter estimation approaches fail to identify linear systems operating in closed loop when both input and output measurements are contaminated by additive noise of unknown (cross-)spectral characteristics. However, even in the absence of measurement noise, parameter estimation is involved owing to the additive system noise entering the loop. The present work introduces a novel criterion which is theoretically insensitive to a class of disturbances and yields the same parameter estimates that one obtains using mean squared error (MSE) minimization in the absence of noise. A strongly convergent sample-based approximation of the proposed criterion is introduced for consistent parameter estimation in practice. It is also shown that in the common case of ARMA modelling the resulting parameter estimates coincide with those obtained from a set of linear equations which can be solved using a time-recursive algorithm. Simulation results are presented to verify the performance of the proposed schemes in low-signal-to-noise-ratio environments.}
}

(J)
Anastasios Delopoulos and Stefanos Kollias
"Optimal filter banks for signal reconstruction from noisy subband components"
IEEE Transactions on Signal Processing, 44, (2), pp. 212-224, 1996 Feb
[Abstract][BibTex][pdf]

Conventional design techniques for analysis and synthesis filters in subband processing applications guarantee perfect reconstruction of the original signal from its subband components. The resulting filters, however, lose their optimality when additive noise due, for example, to signal quantization, disturbs the subband sequences. We propose filter design techniques that minimize the reconstruction mean squared error (MSE) taking into account the second order statistics of signals and noise in the case of either stochastic or deterministic signals. A novel recursive, pseudo-adaptive algorithm is proposed for efficient design of these filters. Analysis and derivations are extended to 2-D signals and filters using powerful Kronecker product notation. A prototype application of the proposed ideas in subband coding is presented. Simulations illustrate the superior performance of the proposed filter banks versus conventional perfect reconstruction filters in the presence of additive subband noise.
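
In compressed notation (suppressing the down/upsampling operators and using symbols of our own), the design problem can be summarized as

    \[
    y_i = h_i * x + n_i,
    \qquad
    \hat{x} = \sum_i g_i * y_i,
    \qquad
    \{g_i\}^{\star} = \arg\min_{\{g_i\}} \; E\big[\lVert x - \hat{x} \rVert^2\big],
    \]

a Wiener-type criterion whose solution depends on the second-order statistics of both signal and noise, and which collapses to the perfect-reconstruction synthesis bank when the subband noise n_i vanishes.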

@article{Delopoulos1996Optimal,
author={Anastasios Delopoulos and Stefanos Kollias},
title={Optimal filter banks for signal reconstruction from noisy subband components},
journal={IEEE Transactions on Signal Processing},
volume={44},
number={2},
pages={212-224},
year={1996},
month={02},
date={1996-02-01},
url={http://mug.ee.auth.gr/wp-content/uploads/18.pdf},
doi={https://doi.org/10.1109/78.485918},
abstract={Conventional design techniques for analysis and synthesis filters in subband processing applications guarantee perfect reconstruction of the original signal from its subband components. The resulting filters, however, lose their optimality when additive noise due, for example, to signal quantization, disturbs the subband sequences. We propose filter design techniques that minimize the reconstruction mean squared error (MSE) taking into account the second order statistics of signals and noise in the case of either stochastic or deterministic signals. A novel recursive, pseudo-adaptive algorithm is proposed for efficient design of these filters. Analysis and derivations are extended to 2-D signals and filters using powerful Kronecker product notation. A prototype application of the proposed ideas in subband coding is presented. Simulations illustrate the superior performance of the proposed filter banks versus conventional perfect reconstruction filters in the presence of additive subband noise.}
}

(J)
Nikos G. Panagiotidis, Anastasios N. Delopoulos and Stefanos D. Kollias
"Application-driven computation of optimum quantization tables for DCT-based block coders"
Digital Compression Technologies and Systems for Video Communications, 617, pp. 617-622, 1996 Sep
[Abstract][BibTex][pdf]

In this paper we propose a method for computing optimal quantization tables for specific images. The main criterion for this processing is the allocation of bandwidth in frequency subspaces in the DCT domain according to power metrics obtained from the transform coefficients. Choice of the weights determines the subjective importance of each frequency coefficient as well as its contribution to the finally perceived image. The simultaneous requirement that the quantization tables yield data compression comparable to the one achieved by the baseline JPEG scheme at various quality factors (QF) imposes an additional constraint on the proposed model.
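
A minimal sketch of the power-driven allocation idea, with invented details: the per-coefficient power of the image's 8x8 DCT blocks rescales a base quantization table so that high-power (subjectively important) coefficients receive finer steps. The strength exponent and the clipping are illustrative stand-ins for the paper's weighting and rate-matching constraint.

    import numpy as np
    from scipy.fft import dctn

    def image_adapted_qtable(img, base_q, strength=0.5):
        """Scale a base 8x8 quantization table by per-coefficient DCT power:
        fine steps where power is high, coarse steps where it is low."""
        h, w = (d - d % 8 for d in img.shape)
        blocks = img[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
        coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
        power = (coeffs ** 2).mean(axis=(0, 1))         # 8x8 power map
        weight = (power / power.mean()) ** (-strength)  # illustrative rule
        return np.round(np.clip(base_q * weight, 1, 255)).astype(np.uint8)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64)).astype(float)
    print(image_adapted_qtable(img, base_q=np.full((8, 8), 16.0)))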

@article{Panagiotidis,
author={Nikos G. Panagiotidis and Anastasios N. Delopoulos and Stefanos D. Kollias},
title={Application-driven computation of optimum quantization tables for DCT-based block coders},
journal={Digital Compression Technologies and Systems for Video Communications},
volume={617},
pages={617-622},
year={1996},
month={09},
date={1996-09-16},
url={http://dx.doi.org/10.1117/12.251318},
doi={https://doi.org/10.1117/12.251318},
abstract={In this paper we propose a method for computing optimal quantization tables for specific images. The main criterion for this processing is the allocation of bandwidth in frequency subspaces in the DCT domain according to power metrics obtained from the transform coefficients. Choice of the weights determines the subjective importance of each frequency coefficient as well as its contribution to the finally perceived image. The simultaneous requirement that the quantization tables yield data compression comparable to the one achieved by the baseline JPEG scheme at various quality factors (QF) imposes an additional constraint on the proposed model.}
}

1995

(J)
Georgios B. Giannakis and Anastasios Delopoulos
"Cumulant based autocorrelation estimates of non-Gaussian linear processes"
Signal Processing, 47, (1), pp. 1-17, 1995 Nov
[Abstract][BibTex][pdf]

Autocorrelation of linear random processes can be expressed in terms of their cumulants. Theoretical insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop (within a scale) autocorrelation estimators of linear non-Gaussian time series using cumulants of order higher than two. Windowed projections of third-order cumulants are shown to yield strongly consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. Asymptotic variance expressions of the proposed estimators are also presented. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.
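
A toy numerical illustration of the projection idea, under simplifications of our own (rectangular window, biased sample cumulants, a synthetic MA(1) process driven by skewed noise): summing the third-order cumulant over one lag is proportional to the autocorrelation sequence, and additive Gaussian noise contributes nothing to it in expectation.

    import numpy as np

    def c3(x, tau, rho):
        """Biased sample third-order cumulant E[x(n) x(n+tau) x(n+rho)]
        of a zero-mean sequence."""
        n = len(x)
        lo, hi = max(0, -tau, -rho), n - max(0, tau, rho)
        return np.sum(x[lo:hi] * x[lo + tau:hi + tau] * x[lo + rho:hi + rho]) / n

    def autocorr_from_cumulants(x, tau, window=20):
        """Windowed projection of c3 over one lag; proportional to r(tau)
        for linear non-Gaussian processes (rectangular window is our choice)."""
        return sum(c3(x, tau, rho) for rho in range(-window, window + 1))

    rng = np.random.default_rng(0)
    e = rng.exponential(1.0, 20000) - 1.0          # zero-mean, skewed input
    x = e + 0.5 * np.concatenate(([0.0], e[:-1]))  # MA(1): h = [1, 0.5]
    x += rng.normal(0.0, 1.0, len(x))              # Gaussian noise, invisible to c3
    r = np.array([autocorr_from_cumulants(x, t) for t in range(3)])
    print(r / r[0])   # roughly [1, 0.4, 0], the true r(tau)/r(0) of the MA(1)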

@article{Giannakis2000Cumulant,
author={Georgios B. Giannakis and Anastasios Delopoulos},
title={Cumulant based autocorrelation estimates of non-Gaussian linear processes},
journal={Signal Processing},
volume={47},
number={1},
pages={1-17},
year={1995},
month={11},
date={1995-11-01},
url={http://dx.doi.org/10.1016/0165-1684(95)00095-X},
doi={https://doi.org/10.1016/0165-1684(95)00095-X},
abstract={Autocorrelation of linear random processes can be expressed in terms of their cumulants. Theoretical insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop (within a scale) autocorrelation estimators of linear non-Gaussian time series using cumulants of order higher than two. Windowed projections of third-order cumulants are shown to yield strongly consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. Asymptotic variance expressions of the proposed estimators are also presented. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.}
}

(J)
Andreas Tirakis, Anastasios Delopoulos and Stefanos Kollias
"2-D Filter Bank Design for Optimal Reconstruction using Limited Subband Information"
IEEE Transactions on Image Processing, 4, (8), pp. 1160-1165, 1995 Aug
[Abstract][BibTex][pdf]

In this correspondence, we propose design techniques for analysis and synthesis filters of 2-D perfect reconstruction filter banks (PRFBs) that perform optimal reconstruction when a reduced number of subband signals is used. Based on the minimization of the squared error between the original signal and some low-resolution representation of it, the 2-D filters are optimally adjusted to the statistics of the input images so that most of the signal's energy is concentrated in the first few subband components. This property makes the optimal PRFBs efficient for image compression and pattern representations at lower resolutions for classification purposes. By extending recently introduced ideas from frequency domain principal component analysis to two dimensions, we present results for general 2-D discrete nonstationary and stationary second-order processes, showing that the optimal filters are nonseparable. Particular attention is paid to separable random fields, proving that only the first and last filters of the optimal PRFB are separable in this case. Simulation results that illustrate the theoretical achievements are presented.

@article{Tirakis1995Filter,
author={Andreas Tirakis and Anastasios Delopoulos and Stefanos Kollias},
title={2-D Filter Bank Design for Optimal Reconstruction using Limited Subband Information},
journal={IEEE Transactions on Image Processing},
volume={4},
number={8},
pages={1160-1165},
year={1995},
month={08},
date={1995-08-01},
url={http://dx.doi.org/10.1109/83.403423},
doi={http://10.1109/83.403423},
abstract={In this correspondence, we propose design techniques for analysis and synthesis filters of 2-D perfect reconstruction filter banks (PRFBs) that perform optimal reconstruction when a reduced number of subband signals is used. Based on the minimization of the squared error between the original signal and some low-resolution representation of it, the 2-D filters are optimally adjusted to the statistics of the input images so that most of the signal's energy is concentrated in the first few subband components. This property makes the optimal PRFBs efficient for image compression and pattern representations at lower resolutions for classification purposes. By extending recently introduced ideas from frequency domain principal component analysis to two dimensions, we present results for general 2-D discrete nonstationary and stationary second-order processes, showing that the optimal filters are nonseparable. Particular attention is paid to separable random fields, proving that only the first and last filters of the optimal PRFB are separable in this case. Simulation results that illustrate the theoretical achievements are presented.}
}

1994

(J)
Anastasios Delopoulos and Georgios B. Giannakis
"Consistent identification of stochastic linear systems with noisy input-output data"
Automatica, 30, (8), pp. 1271-1294, 1994 Aug
[Abstract][BibTex][pdf]

A novel criterion is introduced for parametric errors-in-variables identification of stochastic linear systems excited by non-Gaussian inputs. The new criterion is (at least theoretically) insensitive to a class of input-output disturbances because it implicitly involves higher- than second-order cumulant statistics. In addition, it is shown to be equivalent to the conventional Mean-Squared Error (MSE) as if the latter was computed in the ideal case of noise-free input-output data. The sampled version of the criterion converges to the novel MSE and guarantees strongly consistent parameter estimators. The asymptotic behavior of the resulting parameter estimators is analyzed and guidelines for minimum variance experiments are discussed briefly. Informative enough input signals and persistence of excitation conditions are specified. Computationally attractive Recursive-Least-Squares variants are also developed for on-line implementation of ARMA modeling, and their potential is illustrated by applying them to time-delay estimation in a low-SNR environment. The performance of the proposed algorithms and comparisons with conventional methods are corroborated using simulated data.

@article{Delopoulos1994Consistent,
author={Anastasios Delopoulos and Georgios B. Giannakis},
title={Consistent identification of stochastic linear systems with noisy input-output data},
journal={Automatica},
volume={30},
number={8},
pages={1271-1294},
year={1994},
month={08},
date={1994-08-01},
url={http://mug.ee.auth.gr/wp-content/uploads/1-s2.0-0005109894901082-main.pdf},
doi={https://doi.org/10.1016/0005-1098(94)90108-2},
abstract={A novel criterion is introduced for parametric errors-in-variables identification of stochastic linear systems excited by non-Gaussian inputs. The new criterion is (at least theoretically) insensitive to a class of input-output disturbances because it implicitly involves higher- than second-order cumulant statistics. In addition, it is shown to be equivalent to the conventional Mean-Squared Error (MSE) as if the latter was computed in the ideal case of noise-free input-output data. The sampled version of the criterion converges to the novel MSE and guarantees strongly consistent parameter estimators. The asymptotic behavior of the resulting parameter estimators is analyzed and guidelines for minimum variance experiments are discussed briefly. Informative enough input signals and persistence of excitation conditions are specified. Computationally attractive Recursive-Least-Squares variants are also developed for on-line implementation of ARMA modeling, and their potential is illustrated by applying them to time-delay estimation in a low-SNR environment. The performance of the proposed algorithms and comparisons with conventional methods are corroborated using simulated data.}
}

(J)
Anastasios Delopoulos, A. Tirakis and Stephanos Kollias
"Invariant image classification using triple-correlation-based neural networks"
IEEE Transactions on Neural Networks, 5, (3), pp. 392-408, 1994 May
[Abstract][BibTex][pdf]

Triple-correlation-based neural networks are introduced and used in this paper for invariant classification of 2D gray scale images. Third-order correlations of an image are appropriately clustered, in spatial or spectral domain, to generate an equivalent image representation that is invariant with respect to translation, rotation, and dilation. An efficient implementation scheme is also proposed, which is robust to distortions, insensitive to additive noise, and classifies the original image using adequate neural network architectures applied directly to 2D image representations. Third-order neural networks are shown to be a specific category of triple-correlation-based networks, applied either to binary or gray-scale images. A simulation study is given, which illustrates the theoretical developments, using synthetic and real image data.
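
A compact sketch of the third-order (triple) correlation itself, computed here with periodic shifts for brevity and a hypothetical random test image; the paper's clustering of these values into an invariant representation and the network architectures are beyond this snippet. The exact shift-invariance checked at the end is the property the classifier exploits.

    import numpy as np

    def triple_correlation(img, max_shift=2):
        """t(s1, s2) = mean_n img(n) img(n+s1) img(n+s2) over 2-D shifts
        (periodic boundaries, small shift range for brevity)."""
        shifts = range(-max_shift, max_shift + 1)
        return {
            (dy1, dx1, dy2, dx2): np.mean(
                img
                * np.roll(img, (-dy1, -dx1), axis=(0, 1))
                * np.roll(img, (-dy2, -dx2), axis=(0, 1))
            )
            for dy1 in shifts for dx1 in shifts
            for dy2 in shifts for dx2 in shifts
        }

    img = np.random.default_rng(0).random((16, 16))
    t = triple_correlation(img)
    t_shifted = triple_correlation(np.roll(img, (3, 5), axis=(0, 1)))
    print(max(abs(t[k] - t_shifted[k]) for k in t))  # ~0: translation invariant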

@article{Delopoulos1994Invariant,
author={Anastasios Delopoulos and A. Tirakis and Stephanos Kollias},
title={Invariant image classification using triple-correlation-based neural networks},
journal={IEEE Transactions on Neural Networks},
volume={5},
number={3},
pages={392-408},
year={1994},
month={05},
date={1994-05-01},
url={http://dx.doi.org/10.1109/72.286911},
doi={https://doi.org/10.1109/72.286911},
abstract={Triple-correlation-based neural networks are introduced and used in this paper for invariant classification of 2D gray scale images. Third-order correlations of an image are appropriately clustered, in spatial or spectral domain, to generate an equivalent image representation that is invariant with respect to translation, rotation, and dilation. An efficient implementation scheme is also proposed, which is robust to distortions, insensitive to additive noise, and classifies the original image using adequate neural network architectures applied directly to 2D image representations. Third-order neural networks are shown to be a specific category of triple-correlation-based networks, applied either to binary or gray-scale images. A simulation study is given, which illustrates the theoretical developments, using synthetic and real image data.}
}

1992

(J)
Anastasios Delopoulos and Georgios B. Giannakis
"Strongly consistent identification algorithms and noise insensitive MSE criteria"
IEEE Transactions on Signal Processing, 40, (8), pp. 1955-1970, 1992 Aug
[Abstract][BibTex][pdf]

Windowed cumulant projections of non-Gaussian linear processes yield autocorrelation estimators which are immune to additive Gaussian noise of unknown covariance. By establishing strong consistency of these estimators, strongly consistent and noise insensitive recursive algorithms are developed for parameter estimation. These computationally attractive schemes are shown to be optimal with respect to a modified mean-square-error (MSE) criterion which implicitly exploits the high signal-to-noise ratio domain of cumulant statistics. The novel MSE objective function is expressed in terms of the noisy process, but it is shown to be a scalar multiple of the standard MSE criterion as if the latter was computed in the absence of noise. Simulations illustrate the performance of the proposed algorithms and compare them with the conventional algorithms.

@article{Delopoulos1992Strongly,
author={Anastasios Delopoulos and Georgios B. Giannakis},
title={Strongly consistent identification algorithms and noise insensitive MSE criteria},
journal={IEEE Transactions on Signal Processing},
volume={40},
number={8},
pages={1955-1970},
year={1992},
month={08},
date={1992-08-01},
url={http://dx.doi.org/10.1109/78.149997},
doi={https://doi.org/10.1109/78.149997},
abstract={Windowed cumulant projections of non-Gaussian linear processes yield autocorrelation estimators which are immune to additive Gaussian noise of unknown covariance. By establishing strong consistency of these estimators, strongly consistent and noise insensitive recursive algorithms are developed for parameter estimation. These computationally attractive schemes are shown to be optimal with respect to a modified mean-square-error (MSE) criterion which implicitly exploits the high signal-to-noise ratio domain of cumulant statistics. The novel MSE objective function is expressed in terms of the noisy process, but it is shown to be a scalar multiple of the standard MSE criterion as if the latter was computed in the absence of noise. Simulations illustrate the performance of the proposed algorithms and compare them with the conventional algorithms.}
}

1990

(J)
Georgios Giannakis and Anastasios Delopoulos
"Nonparametric estimation of autocorrelation and spectra using cumulants and polyspectra"
Advanced Signal Processing Algorithms, Architectures, and Implementations, 503, 1990 Nov
[Abstract][BibTex][pdf]

Autocorrelation and spectra of linear random processes can be expressed in terms of cumulants and polyspectra respectively. The insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop spectral estimators of deterministic and linear non-Gaussian signals using polyspectra. In the time domain, windowed projections of third-order cumulants are shown to yield consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. In the frequency domain, a Fourier-slice solution and a least-squares approach are described for performing spectral analysis through windowed bi-periodograms. Asymptotic variance expressions of the time- and frequency-domain estimators are also presented. Two-dimensional extensions are indicated and potential applications are discussed. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.
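
The Fourier-slice idea in outline (standard bispectrum algebra, in our notation): for a linear process x = h * e driven by i.i.d. input with nonzero skewness gamma_3,

    \[
    B(\omega_1, \omega_2) = \gamma_3\, H(\omega_1)\, H(\omega_2)\, H^{*}(\omega_1 + \omega_2)
    \;\Longrightarrow\;
    B(\omega, 0) = \gamma_3\, H(0)\, \lvert H(\omega) \rvert^{2} \;\propto\; S(\omega),
    \]

provided H(0) and gamma_3 are nonzero; since Gaussian noise has an identically zero bispectrum, the slice recovers the noise-free power spectrum within a scale.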

@article{Giannakis1990Nonparametric,
author={Georgios Giannakis and Anastasios Delopoulos},
title={Nonparametric estimation of autocorrelation and spectra using cumulants and polyspectra},
journal={Advanced Signal Processing Algorithms, Architectures, and Implementations},
volume={503},
year={1990},
month={11},
date={1990-11-01},
url={http://dx.doi.org/10.1117/12.23504},
doi={https://doi.org/10.1117/12.23504},
abstract={Autocorrelation and spectra of linear random processes can be expressed in terms of cumulants and polyspectra respectively. The insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop spectral estimators of deterministic and linear non-Gaussian signals using polyspectra. In the time domain, windowed projections of third-order cumulants are shown to yield consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. In the frequency domain, a Fourier-slice solution and a least-squares approach are described for performing spectral analysis through windowed bi-periodograms. Asymptotic variance expressions of the time- and frequency-domain estimators are also presented. Two-dimensional extensions are indicated and potential applications are discussed. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.}
}