Publications



2019

(C)
Alexandros Papadopoulos, Konstantinos Kyritsis, Ioannis Sarafis and Anastasios Delopoulos
"Multiple-Instance Learning for In-The-Wild Parkinsonian Tremor Detection"
41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2019 Jul
[Abstract][BibTex][pdf]

Parkinson’s Disease (PD) is a neurodegenerative disorder that manifests through slowly progressing symptoms, such as tremor, voice degradation and bradykinesia. Automated detection of such symptoms has recently received much attention by the research community, owing to the clinical benefits associated with the early diagnosis of the disease. Unfortunately, most of the approaches proposed so far operate under a strictly laboratory setting, thus limiting their potential applicability in real world conditions. In this work, we present a method for automatically detecting tremorous episodes related to PD, based on acceleration signals. We propose to address the problem at hand as a case of Multiple-Instance Learning, wherein a subject is represented as an unordered bag of signal segments and a single, expert-provided, ground-truth. We employ a deep learning approach that combines feature learning and a learnable pooling stage and is trainable end-to-end. Results on a newly introduced dataset of accelerometer signals collected in-the-wild confirm the validity of the proposed approach.
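
The bag-of-segments idea in the abstract can be illustrated with a minimal sketch, assuming a softmax-weighted pooling as the learnable aggregation; the function names and scores below are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

def bag_prediction(instance_scores, alpha=1.0):
    """Aggregate per-segment tremor scores into one subject-level
    prediction via a differentiable softmax-weighted pooling.
    instance_scores: 1-D array, one score per signal segment."""
    w = np.exp(alpha * instance_scores)
    w /= w.sum()                       # attention-like pooling weights
    return float(np.dot(w, instance_scores))

# A subject is an unordered bag of segment scores with a single label.
bag = np.array([0.1, 0.05, 0.9, 0.2])   # one high-scoring tremor segment
score = bag_prediction(bag, alpha=5.0)  # emphasizes that segment over a plain mean
```

With a large `alpha` the pooling approaches max-pooling, so a single tremorous segment can dominate the subject-level decision, which is exactly what makes the pooling stage worth learning.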

@conference{apadopoulos2019mil,
author={Alexandros Papadopoulos and Konstantinos Kyritsis and Ioannis Sarafis and Anastasios Delopoulos},
title={Multiple-Instance Learning for In-The-Wild Parkinsonian Tremor Detection},
booktitle={41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
year={2019},
month={07},
date={2019-07-17},
url={https://mug.ee.auth.gr/wp-content/uploads/apadopoulos2019mil.pdf},
abstract={Parkinson’s Disease (PD) is a neurodegenerative disorder that manifests through slowly progressing symptoms, such as tremor, voice degradation and bradykinesia. Automated detection of such symptoms has recently received much attention by the research community, owing to the clinical benefits associated with the early diagnosis of the disease. Unfortunately, most of the approaches proposed so far operate under a strictly laboratory setting, thus limiting their potential applicability in real world conditions. In this work, we present a method for automatically detecting tremorous episodes related to PD, based on acceleration signals. We propose to address the problem at hand as a case of Multiple-Instance Learning, wherein a subject is represented as an unordered bag of signal segments and a single, expert-provided, ground-truth. We employ a deep learning approach that combines feature learning and a learnable pooling stage and is trainable end-to-end. Results on a newly introduced dataset of accelerometer signals collected in-the-wild confirm the validity of the proposed approach.}
}

2018

(C)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
"End-to-end Learning for Measuring in-meal Eating Behavior from a Smartwatch"
40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Honolulu, HI, USA, 2018 Oct
[Abstract][BibTex][pdf]

In this paper, we propose an end-to-end neural network (NN) architecture for detecting in-meal eating events (i.e., bites), using only a commercially available smartwatch. Our method combines convolutional and recurrent networks and is able to simultaneously learn intermediate data representations related to hand movements, as well as sequences of these movements that appear during eating. A promising F-score of 0.884 is achieved for detecting bites on a publicly available dataset with 10 subjects.

@conference{Kiritsis2018,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={End-to-end Learning for Measuring in-meal Eating Behavior from a Smartwatch},
booktitle={40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
address={Honolulu, HI, USA},
year={2018},
month={10},
date={2018-10-29},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2018end.pdf},
doi={10.1109/EMBC.2018.8513627},
issn={1558-4615},
isbn={978-1-5386-3647-3},
publishersurl={https://ieeexplore.ieee.org/abstract/document/8513627},
abstract={In this paper, we propose an end-to-end neural network (NN) architecture for detecting in-meal eating events (i.e., bites), using only a commercially available smartwatch. Our method combines convolutional and recurrent networks and is able to simultaneously learn intermediate data representations related to hand movements, as well as sequences of these movements that appear during eating. A promising F-score of 0.884 is achieved for detecting bites on a publicly available dataset with 10 subjects.}
}

(C)
Alexandros Papadopoulos, Konstantinos Kyritsis, Ioannis Sarafis and Anastasios Delopoulos
"Personalised meal eating behaviour analysis via semi-supervised learning"
40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Honolulu, HI, USA, 2018 Oct
[Abstract][BibTex][pdf]

Automated monitoring and analysis of eating behaviour patterns, i.e., “how one eats”, has recently received much attention by the research community, owing to the association of eating patterns with health-related problems and especially obesity and its comorbidities. In this work, we introduce an improved method for meal micro-structure analysis. Stepping on a previous methodology of ours that combines feature extraction, SVM micro-movement classification and LSTM sequence modelling, we propose a method to adapt a pretrained IMU-based food intake cycle detection model to a new subject, with the purpose of improving model performance for that subject. We split model training into two stages. First, the model is trained using standard supervised learning techniques. Then, an adaptation step is performed, where the model is fine-tuned on unlabeled samples of the target subject via semisupervised learning. Evaluation is performed on a publicly available dataset that was originally created and used in [1] and has been extended here to demonstrate the effect of the semisupervised approach, where the proposed method improves over the baseline method.
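
The two-stage scheme described above (supervised pretraining, then fine-tuning on unlabeled target-subject samples) can be sketched with a generic self-training loop; the 1-D threshold classifier below is a toy stand-in for the pretrained model, and all names and data are illustrative:

```python
import numpy as np

def fit_threshold(x, y):
    """Stage 1: supervised training of a trivial 1-D threshold classifier
    (midpoint between the two class means)."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

def adapt(threshold, x_unlabeled, rounds=3):
    """Stage 2: adapt to the target subject via self-training:
    pseudo-label the unlabeled samples, then refit on them."""
    for _ in range(rounds):
        pseudo = (x_unlabeled > threshold).astype(int)
        if pseudo.min() == pseudo.max():   # degenerate pseudo-labeling; stop
            break
        threshold = fit_threshold(x_unlabeled, pseudo)
    return threshold

# Source-population data ...
x_src = np.array([0.0, 0.2, 0.8, 1.0])
y_src = np.array([0, 0, 1, 1])
t0 = fit_threshold(x_src, y_src)
# ... adapted to a subject whose indicator values sit lower overall.
x_subj = np.array([-0.4, -0.2, 0.5, 0.7])
t1 = adapt(t0, x_subj)
```

The adapted threshold shifts toward the target subject's own data distribution, which is the effect the personalisation step aims for.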

@conference{papadopoulos2018personalised,
author={Alexandros Papadopoulos and Konstantinos Kyritsis and Ioannis Sarafis and Anastasios Delopoulos},
title={Personalised meal eating behaviour analysis via semi-supervised learning},
booktitle={40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
address={Honolulu, HI, USA},
year={2018},
month={10},
date={2018-10-29},
url={http://mug.ee.auth.gr/wp-content/uploads/papadopoulos2018personalised.pdf},
doi={10.1109/EMBC.2018.8513174},
publishersurl={https://ieeexplore.ieee.org/document/8513174},
abstract={Automated monitoring and analysis of eating behaviour patterns, i.e., “how one eats”, has recently received much attention by the research community, owing to the association of eating patterns with health-related problems and especially obesity and its comorbidities. In this work, we introduce an improved method for meal micro-structure analysis. Stepping on a previous methodology of ours that combines feature extraction, SVM micro-movement classification and LSTM sequence modelling, we propose a method to adapt a pretrained IMU-based food intake cycle detection model to a new subject, with the purpose of improving model performance for that subject. We split model training into two stages. First, the model is trained using standard supervised learning techniques. Then, an adaptation step is performed, where the model is fine-tuned on unlabeled samples of the target subject via semisupervised learning. Evaluation is performed on a publicly available dataset that was originally created and used in [1] and has been extended here to demonstrate the effect of the semisupervised approach, where the proposed method improves over the baseline method.}
}

2017

(C)
Vasilis Papapanagiotou, Christos Diou, Lingjuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"The SPLENDID chewing detection challenge"
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 817-820, IEEE, 2017 Jul
[Abstract][BibTex][pdf]

Monitoring of eating behavior using wearable technology is receiving increased attention, driven by the recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. There are several chewing sensor types and chewing detection algorithms proposed in the bibliography, however no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided both at eating event level and at individual chew level. We also provide a baseline evaluation method, and introduce the “challenge” of improving the baseline chewing detection algorithms. The dataset can be downloaded from http://dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code can be downloaded from https://github.com/mug-auth/chewing-detection-challenge.git.

@inproceedings{8036949,
author={Vasilis Papapanagiotou and Christos Diou and Lingjuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={The SPLENDID chewing detection challenge},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={817-820},
publisher={IEEE},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017splendid.pdf},
doi={10.1109/EMBC.2017.8036949},
publishersurl={http://ieeexplore.ieee.org/document/8036949/},
abstract={Monitoring of eating behavior using wearable technology is receiving increased attention, driven by the recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. There are several chewing sensor types and chewing detection algorithms proposed in the bibliography, however no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided both at eating event level and at individual chew level. We also provide a baseline evaluation method, and introduce the “challenge” of improving the baseline chewing detection algorithms. The dataset can be downloaded from http://dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code can be downloaded from https://github.com/mug-auth/chewing-detection-challenge.git.}
}

(C)
Vasilis Papapanagiotou, Christos Diou and Anastasios Delopoulos
"Chewing detection from an in-ear microphone using convolutional neural networks"
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1258-1261, 2017 Jul
[Abstract][BibTex][pdf]

Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring remains a challenging task. This is mainly due to the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distinctive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature-learning in the time domain with 1-dimensional convolutional neural networks for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.
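
The time-domain feature-learning idea can be sketched with a single hand-rolled 1-D convolution over a raw audio window; the filter sizes and random stand-in weights below are illustrative assumptions, and the paper's actual network is deeper and trained end-to-end:

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid 1-D convolution: slide the kernel over the raw audio window."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(0, len(x) - k + 1, stride)])

def chew_score(window, kernel, weights):
    """Conv layer -> ReLU -> fully connected layer -> sigmoid score."""
    h = np.maximum(conv1d(window, kernel), 0.0)   # learned time-domain features
    return 1.0 / (1.0 + np.exp(-np.dot(h, weights)))

rng = np.random.default_rng(0)
window = rng.standard_normal(64)          # one window of audio samples
kernel = rng.standard_normal(9) * 0.1     # stand-in for a learned filter
weights = rng.standard_normal(56) * 0.1   # 64 - 9 + 1 = 56 conv outputs
score = chew_score(window, kernel, weights)
```

Windows scored above a threshold would be marked as chews and then merged into eating events, mirroring the chew-to-event aggregation the abstract describes.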

@inproceedings{8037060,
author={Vasilis Papapanagiotou and Christos Diou and Anastasios Delopoulos},
title={Chewing detection from an in-ear microphone using convolutional neural networks},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={1258-1261},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2017chewing.pdf},
doi={10.1109/EMBC.2017.8037060},
publishersurl={http://ieeexplore.ieee.org/document/8037060/},
abstract={Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring remains a challenging task. This is mainly due to the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distinctive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature-learning in the time domain with 1-dimensional convolutional neural networks for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.}
}

(C)
Konstantinos Kyritsis, Christina L. Tatli, Christos Diou and Anastasios Delopoulos
"Automated analysis of in meal eating behavior using a commercial wristband IMU sensor"
2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2843-2846, IEEE, Seogwipo, South Korea, 2017 Jul
[Abstract][BibTex][pdf]

Automatic objective monitoring of eating behavior using inertial sensors is a research problem that has received a lot of attention recently, mainly due to the mass availability of IMUs and the evidence on the importance of quantifying and monitoring eating patterns. In this paper we propose a method for detecting food intake cycles during the course of a meal using a commercially available wristband. We first model micro-movements that are part of the intake cycle and then use HMMs to model the sequences of micro-movements leading to mouthfuls. Evaluation is carried out on an annotated dataset of 8 subjects where the proposed method achieves 0.78 precision and 0.77 recall. The evaluation dataset is publicly available at http://mug.ee.auth.gr/intake-cycle-detection/.
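
The HMM step — decoding the most likely sequence of micro-movement states from observations — can be illustrated with a minimal Viterbi decoder over a toy two-state model; the states and probabilities below are made up for illustration, not the paper's trained HMM:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state path for an observation sequence.
    start[i], trans[i, j], emit[i, o] are log-probabilities."""
    T, N = len(obs), len(start)
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0] = start + emit[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            cand = score[t - 1] + trans[:, j]     # best predecessor state
            back[t, j] = cand.argmax()
            score[t, j] = cand.max() + emit[j, obs[t]]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy states: 0 = "other movement", 1 = "intake micro-movement".
log = np.log
start = log(np.array([0.8, 0.2]))
trans = log(np.array([[0.7, 0.3],
                      [0.4, 0.6]]))
emit = log(np.array([[0.9, 0.1],     # "other" rarely looks like intake
                     [0.2, 0.8]]))   # "intake" usually does
path = viterbi([0, 1, 1, 0], start, trans, emit)
```

Runs of the "intake" state in the decoded path would correspond to detected food intake cycles.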

@inproceedings{8037449,
author={Konstantinos Kyritsis and Christina L. Tatli and Christos Diou and Anastasios Delopoulos},
title={Automated analysis of in meal eating behavior using a commercial wristband IMU sensor},
booktitle={2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={2843-2846},
publisher={IEEE},
address={Seogwipo, South Korea},
year={2017},
month={07},
date={2017-07-01},
url={http://mug.ee.auth.gr/wp-content/uploads/kyritsis2017automated.pdf},
doi={10.1109/EMBC.2017.8037449},
publishersurl={http://ieeexplore.ieee.org/document/8037449/},
abstract={Automatic objective monitoring of eating behavior using inertial sensors is a research problem that has received a lot of attention recently, mainly due to the mass availability of IMUs and the evidence on the importance of quantifying and monitoring eating patterns. In this paper we propose a method for detecting food intake cycles during the course of a meal using a commercially available wristband. We first model micro-movements that are part of the intake cycle and then use HMMs to model the sequences of micro-movements leading to mouthfuls. Evaluation is carried out on an annotated dataset of 8 subjects where the proposed method achieves 0.78 precision and 0.77 recall. The evaluation dataset is publicly available at http://mug.ee.auth.gr/intake-cycle-detection/.}
}

(C)
Christos Diou, Ioannis Sarafis, Ioannis Ioakimidis and Anastasios Delopoulos
"Data-driven assessments for sensor measurements of eating behavior"
Biomedical & Health Informatics (BHI), 2017 IEEE EMBS International Conference on, pp. 129-132, 2017 Jan
[Abstract][BibTex][pdf]

Two major challenges in sensor-based measurement and assessment of healthy eating behavior are (a) choosing the behavioral indicators to be measured, and (b) interpreting the measured values. While much of the work towards solving these problems belongs in the domain of behavioral science, there are several areas where technology can help. This paper outlines an approach for representing and interpreting eating and activity behavior based on sensor measurements and data available from a reference population. The main idea is to assess the “similarity” of an individual's behavior to previous data recordings of a relevant reference population. Thus, by appropriate selection of the indicators and reference data it is possible to perform comparative behavioral evaluation and support decisions, even in cases where no clear medical guidelines for the indicator values exist. We examine the simple, univariate case (one indicator) and then extend these ideas to the multivariate problem (several indicators) using one-class SVM to measure the distance from the reference population.
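
The univariate case — scoring an individual's indicator against a reference population — reduces to a percentile rank, sketched below; the variable names and data are illustrative, and the multivariate case in the paper replaces this with a one-class SVM:

```python
import numpy as np

def percentile_rank(value, reference):
    """Fraction of the reference population whose indicator value
    lies below the individual's measurement."""
    reference = np.asarray(reference)
    return float((reference < value).mean())

# Hypothetical reference population for one behavioral indicator.
reference = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
rank = percentile_rank(43.0, reference)   # individual under assessment
```

A rank near 0 or 1 flags the individual as atypical relative to the reference population, which supports a comparative assessment even when no absolute medical guideline exists for the indicator.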

@inproceedings{diou2017data,
author={Christos Diou and Ioannis Sarafis and Ioannis Ioakimidis and Anastasios Delopoulos},
title={Data-driven assessments for sensor measurements of eating behavior},
booktitle={Biomedical & Health Informatics (BHI), 2017 IEEE EMBS International Conference on},
pages={129-132},
year={2017},
month={01},
date={2017-01-01},
url={http://ieeexplore.ieee.org/document/7897222/},
abstract={Two major challenges in sensor-based measurement and assessment of healthy eating behavior are (a) choosing the behavioral indicators to be measured, and (b) interpreting the measured values. While much of the work towards solving these problems belongs in the domain of behavioral science, there are several areas where technology can help. This paper outlines an approach for representing and interpreting eating and activity behavior based on sensor measurements and data available from a reference population. The main idea is to assess the “similarity” of an individual's behavior to previous data recordings of a relevant reference population. Thus, by appropriate selection of the indicators and reference data it is possible to perform comparative behavioral evaluation and support decisions, even in cases where no clear medical guidelines for the indicator values exist. We examine the simple, univariate case (one indicator) and then extend these ideas to the multivariate problem (several indicators) using one-class SVM to measure the distance from the reference population.}
}

(C)
Iason Karakostas, Vasileios Papapanagiotou and Anastasios Delopoulos
"Building Parsimonious SVM Models for Chewing Detection and Adapting Them to the User"
New Trends in Image Analysis and Processing -- ICIAP 2017, pp. 403-410, Springer International Publishing, Cham, 2017 Dec
[Abstract][BibTex]

Monitoring of eating activity is a well-established yet challenging problem. Various sensors have been proposed in the literature, including in-ear microphones, strain sensors, and photoplethysmography. Most of these approaches use detection algorithms that include machine learning; however, a universal, non user-specific model is usually trained from an available dataset for the final system. In this paper, we present a chewing detection system that can adapt to each user independently using active learning (AL) with minimal intrusiveness. The system captures audio from a commercial bone-conduction microphone connected to an Android smart-phone. We employ a state-of-the-art feature extraction algorithm and extend the Support Vector Machine (SVM) classification stage using AL. The effectiveness of the adaptable classification model can quickly converge to that achieved when using the entire available training set. We further use AL to create SVM models with a small number of support vectors, thus reducing the computational requirements, without significantly sacrificing effectiveness. To support our arguments, we have recorded a dataset from eight participants, each performing once or twice a standard protocol that includes consuming various types of food, as well as non-eating activities such as silent and noisy environments and conversation. Results show accuracy of 0.85 and F1 score of 0.83 in the best case for the user-specific models.
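
The active-learning idea — querying the user only for the most informative samples — can be sketched with an uncertainty-sampling loop; the trivial 1-D threshold classifier below stands in for the SVM, and all names and data are illustrative:

```python
import numpy as np

def fit(x, y):
    """Stand-in 'SVM': a 1-D threshold at the midpoint of class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

def active_learn(x_pool, y_pool, n_seed=2, n_queries=3):
    """Uncertainty sampling: repeatedly ask the user to label the pool
    sample closest to the current decision boundary."""
    labeled = list(range(n_seed))                 # small seed set
    t = fit(x_pool[labeled], y_pool[labeled])
    for _ in range(n_queries):
        rest = [i for i in range(len(x_pool)) if i not in labeled]
        if not rest:
            break
        q = min(rest, key=lambda i: abs(x_pool[i] - t))   # most uncertain
        labeled.append(q)                         # user provides y_pool[q]
        t = fit(x_pool[labeled], y_pool[labeled])
    return t, labeled

x_pool = np.array([0.0, 1.0, 0.45, 0.2, 0.8, 0.55])
y_pool = np.array([0, 1, 0, 0, 1, 1])   # oracle labels, revealed on query
t, labeled = active_learn(x_pool, y_pool)
```

Only a handful of user labels are requested, yet the boundary converges toward one that separates the pool, mirroring the minimal-intrusiveness goal of the paper.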

@inproceedings{Karakostas2017,
author={Iason Karakostas and Vasileios Papapanagiotou and Anastasios Delopoulos},
title={Building Parsimonious SVM Models for Chewing Detection and Adapting Them to the User},
booktitle={New Trends in Image Analysis and Processing -- ICIAP 2017},
pages={403-410},
publisher={Springer International Publishing},
editor={Battiato, Sebastiano and Farinella, Giovanni Maria and Leo, Marco and Gallo, Giovanni},
address={Cham},
year={2017},
month={12},
date={2017-12-31},
doi={10.1007/978-3-319-70742-6_38},
publishersurl={https://link.springer.com/chapter/10.1007/978-3-319-70742-6_38},
abstract={Monitoring of eating activity is a well-established yet challenging problem. Various sensors have been proposed in the literature, including in-ear microphones, strain sensors, and photoplethysmography. Most of these approaches use detection algorithms that include machine learning; however, a universal, non user-specific model is usually trained from an available dataset for the final system. In this paper, we present a chewing detection system that can adapt to each user independently using active learning (AL) with minimal intrusiveness. The system captures audio from a commercial bone-conduction microphone connected to an Android smart-phone. We employ a state-of-the-art feature extraction algorithm and extend the Support Vector Machine (SVM) classification stage using AL. The effectiveness of the adaptable classification model can quickly converge to that achieved when using the entire available training set. We further use AL to create SVM models with a small number of support vectors, thus reducing the computational requirements, without significantly sacrificing effectiveness. To support our arguments, we have recorded a dataset from eight participants, each performing once or twice a standard protocol that includes consuming various types of food, as well as non-eating activities such as silent and noisy environments and conversation. Results show accuracy of 0.85 and F1 score of 0.83 in the best case for the user-specific models.}
}

(C)
Angelos Katharopoulos, Despoina Paschalidou, Christos Diou and Anastasios Delopoulos
"Learning local feature aggregation functions with backpropagation"
25th European Signal Processing Conference (EUSIPCO), pp. 748-752, IEEE, Kos, Greece, 2017 Aug
[Abstract][BibTex][pdf]

This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin.
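
The core idea — backpropagating the classifier's cost through the aggregation function to update its parameters — can be illustrated with a tiny example; for clarity this toy uses a numerical gradient and a squared-error target in place of a real classifier, whereas the paper derives analytic gradients, and all names are hypothetical:

```python
import numpy as np

def aggregate(local_feats, theta):
    """Parametric aggregation: softmax(theta . x_i) weights over the
    bag of local features, producing one fixed-size representation."""
    logits = local_feats @ theta
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ local_feats

def cost(theta, local_feats, target):
    """Task cost composed with the aggregation (squared error against
    a target representation stands in for the classifier cost)."""
    r = aggregate(local_feats, theta)
    return float(((r - target) ** 2).sum())

def grad_step(theta, local_feats, target, lr=0.1, eps=1e-6):
    """One gradient-descent step on theta via central differences."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (cost(theta + e, local_feats, target)
                - cost(theta - e, local_feats, target)) / (2 * eps)
    return theta - lr * g

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))   # bag of 8 local descriptors
target = X[0]                     # representation the task "wants"
theta = np.zeros(3)               # uniform weights = plain averaging
for _ in range(50):
    theta = grad_step(theta, X, target)
```

Starting from plain averaging, the learned weights shift toward the class-relevant features, which is the behavior that separates this family of functions from fixed aggregations such as Bag of Words or VLAD.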

@inproceedings{Katharopoulos2017,
author={Angelos Katharopoulos and Despoina Paschalidou and Christos Diou and Anastasios Delopoulos},
title={Learning local feature aggregation functions with backpropagation},
booktitle={25th European Signal Processing Conference (EUSIPCO)},
pages={748-752},
publisher={IEEE},
address={Kos, Greece},
year={2017},
month={08},
date={2017-08-28},
url={https://arxiv.org/pdf/1706.08580.pdf},
doi={10.23919/EUSIPCO.2017.8081307},
abstract={This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin.}
}

(C)
Konstantinos Kyritsis, Christos Diou and Anastasios Delopoulos
"Food Intake Detection from Inertial Sensors Using LSTM Networks"
New Trends in Image Analysis and Processing -- ICIAP 2017: ICIAP International Workshops, pp. 411-418, Springer International Publishing, Catania, Italy, 2017 Sep
[Abstract][BibTex][pdf]

Unobtrusive analysis of eating behavior based on Inertial Measurement Unit (IMU) sensors (e.g. accelerometer) is a topic that has attracted the interest of both the industry and the research community over the past years. This work presents a method for detecting food intake moments that occur during a meal session using the accelerometer and gyroscope signals of an off-the-shelf smartwatch. We propose a two step approach. First, we model the hand micro-movements that take place while eating using an array of binary Support Vector Machines (SVMs); then the detection of intake moments is achieved by processing the sequence of SVM score vectors by a Long Short Term Memory (LSTM) network. Evaluation is performed on a publicly available dataset with 10 subjects, where the proposed method outperforms similar approaches by achieving an F1 score of 0.892.

@inproceedings{Kyritsis2017ICIAP,
author={Konstantinos Kyritsis and Christos Diou and Anastasios Delopoulos},
title={Food Intake Detection from Inertial Sensors Using LSTM Networks},
booktitle={New Trends in Image Analysis and Processing -- ICIAP 2017: ICIAP International Workshops},
pages={411-418},
publisher={Springer International Publishing},
editor={Battiato, Sebastiano and Farinella, Giovanni Maria and Leo, Marco and Gallo, Giovanni},
address={Catania, Italy},
year={2017},
month={09},
date={2017-09-11},
url={https://mug.ee.auth.gr/wp-content/uploads/madima2017.pdf},
doi={10.1007/978-3-319-70742-6_39},
publishersurl={https://link.springer.com/chapter/10.1007/978-3-319-70742-6_39},
keywords={Food intake;Eating monitoring;Wearable sensors;LSTM},
abstract={Unobtrusive analysis of eating behavior based on Inertial Measurement Unit (IMU) sensors (e.g. accelerometer) is a topic that has attracted the interest of both the industry and the research community over the past years. This work presents a method for detecting food intake moments that occur during a meal session using the accelerometer and gyroscope signals of an off-the-shelf smartwatch. We propose a two step approach. First, we model the hand micro-movements that take place while eating using an array of binary Support Vector Machines (SVMs); then the detection of intake moments is achieved by processing the sequence of SVM score vectors by a Long Short Term Memory (LSTM) network. Evaluation is performed on a publicly available dataset with 10 subjects, where the proposed method outperforms similar approaches by achieving an F1 score of 0.892.}
}

2016

(C)
Vasilis Papapanagiotou, Christos Diou, Lingchuan Zhou, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"A novel approach for chewing detection based on a wearable PPG sensor"
2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6485-6488, 2016 Aug
[Abstract][BibTex][pdf]

Monitoring of human eating behaviour has been attracting interest over the last few years, as a means to a healthy lifestyle, but also due to its association with serious health conditions, such as eating disorders and obesity. Use of self-reports and other non-automated means of monitoring have been found to be unreliable, compared to the use of wearable sensors. Various modalities have been reported, such as acoustic signal from ear-worn microphones, or signal from wearable strain sensors. In this work, we introduce a new sensor for the task of chewing detection, based on a novel photoplethysmography (PPG) sensor placed on the outer earlobe to perform the task. We also present a processing pipeline that includes two chewing detection algorithms from literature and one new algorithm, to process the captured PPG signal, and present their effectiveness. Experiments are performed on an annotated dataset recorded from 21 individuals, including more than 10 hours of eating and non-eating activities. Results show that the PPG sensor can be successfully used to support dietary monitoring.

@inproceedings{7592214,
author={Vasilis Papapanagiotou and Christos Diou and Lingchuan Zhou and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={A novel approach for chewing detection based on a wearable PPG sensor},
booktitle={2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={6485-6488},
year={2016},
month={08},
date={2016-08-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2016novel.pdf},
doi={10.1109/EMBC.2016.7592214},
publishersurl={http://ieeexplore.ieee.org/document/7592214/},
keywords={Ear;Microphones;Monitoring;Medical disorders;Optical sensors;Patient monitoring;Photoplethysmography;PPG signal;Acoustic signal;Chewing detection;Dietary monitoring;Ear-worn microphone;Eating disorder;Health condition;Human eating behaviour monitoring;Noneating Activity;Obesity;Wearable PPG sensor;Wearable sensor;Wearable strain sensor;Light emitting diodes;Muscles;Pipelines;Prototypes},
abstract={Monitoring of human eating behaviour has been attracting interest over the last few years, as a means to a healthy lifestyle, but also due to its association with serious health conditions, such as eating disorders and obesity. Use of self-reports and other non-automated means of monitoring have been found to be unreliable, compared to the use of wearable sensors. Various modalities have been reported, such as acoustic signal from ear-worn microphones, or signal from wearable strain sensors. In this work, we introduce a new sensor for the task of chewing detection, based on a novel photoplethysmography (PPG) sensor placed on the outer earlobe to perform the task. We also present a processing pipeline that includes two chewing detection algorithms from literature and one new algorithm, to process the captured PPG signal, and present their effectiveness. Experiments are performed on an annotated dataset recorded from 21 individuals, including more than 10 hours of eating and non-eating activities. Results show that the PPG sensor can be successfully used to support dietary monitoring.}
}

(C)
Angelos Katharopoulos, Despoina Paschalidou, Christos Diou and Anastasios Delopoulos
"Fast Supervised LDA for discovering micro-events in large-scale video datasets"
Proceedings of the 24th ACM International Conference on Multimedia (ACM-MM 2016), Amsterdam, The Netherlands, 2016 Oct
[Abstract][BibTex][pdf]

This paper introduces fsLDA, a fast variational inference method for supervised LDA, which overcomes the computational limitations of the original supervised LDA and enables its application in large-scale video datasets. In addition to its scalability, our method also overcomes the drawbacks of standard, unsupervised LDA for video, including its focus on dominant but often irrelevant video information (e.g. background, camera motion). As a result, experiments in the UCF11 and UCF101 datasets show that our method consistently outperforms unsupervised LDA in every metric. Furthermore, analysis shows that class-relevant topics of fsLDA lead to sparse video representations and encapsulate high-level information corresponding to parts of video events, which we denote “micro-events”.

@inproceedings{KatharopoulosACMMM2016,
author={Angelos Katharopoulos and Despoina Paschalidou and Christos Diou and Anastasios Delopoulos},
title={Fast Supervised LDA for discovering micro-events in large-scale video datasets},
booktitle={Proceedings of the 24th ACM International Conference on Multimedia (ACM-MM 2016)},
address={Amsterdam, The Netherlands},
year={2016},
month={10},
date={2016-10-15},
url={http://mug.ee.auth.gr/wp-content/uploads/fsLDA.pdf},
abstract={This paper introduces fsLDA, a fast variational inference method for supervised LDA, which overcomes the computational limitations of the original supervised LDA and enables its application in large-scale video datasets. In addition to its scalability, our method also overcomes the drawbacks of standard, unsupervised LDA for video, including its focus on dominant but often irrelevant video information (e.g. background, camera motion). As a result, experiments in the UCF11 and UCF101 datasets show that our method consistently outperforms unsupervised LDA in every metric. Furthermore, analysis shows that class-relevant topics of fsLDA lead to sparse video representations and encapsulate high-level information corresponding to parts of video events, which we denote ''micro-events''.}
}

2015

(C)
Vasileios Papapanagiotou, Christos Diou, Billy Langlet, Ioannis Ioakimidis and Anastasios Delopoulos
"Automated Extraction of Food Intake Indicators from Continuous Meal Weight Measurements"
Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, April 15-17, 2015. Proceedings, Part II, pp. 35-46, Springer International Publishing, Cham, 2015 Jan
[Abstract][BibTex]

Recent studies and clinical practice have shown that the extraction of detailed eating behaviour indicators is critical in identifying risk factors and/or treating obesity and eating disorders, such as anorexia and bulimia nervosa. A number of single meal analysis methods that have been successfully applied are based on the Mandometer, a weight scale that continuously measures the weight of food on a plate over the course of a meal. Experimental meal analysis is performed using the cumulative food intake curve, which is produced by the semi-automatic processing of the Mandometer weight measurements, in tandem with the video recordings of the eating session. Due to its complexity and the video recording dependence, this process is not suited to a clinical or a real-life setting. In this work, we evaluate a method for automating the extraction of an accurate food intake curve, corrected for food additions during the meal and artificial weight fluctuations, using only the raw Mandometer output. Since the method requires no manual corrections or external video recordings it is appropriate for clinical or free-living use. Three algorithms are presented based on rules, greedy decisioning and exhaustive search, as well as evaluation methods of the Mandometer measurements. Experiments on a set of 114 meals collected from both normal and disordered eaters in a clinical environment illustrate the effectiveness of the proposed approach.

@inproceedings{Papapanagiotou2015Automated,
author={Vasileios Papapanagiotou and Christos Diou and Billy Langlet and Ioannis Ioakimidis and Anastasios Delopoulos},
title={Automated Extraction of Food Intake Indicators from Continuous Meal Weight Measurements},
booktitle={Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, April 15-17, 2015. Proceedings, Part II},
pages={35-46},
publisher={Springer International Publishing},
editor={Ortu{\~n}o, Francisco and Rojas, Ignacio},
address={Cham},
year={2015},
month={01},
date={2015-01-01},
doi={10.1007/978-3-319-16480-9_4},
isbn={978-3-319-16480-9},
publishersurl={https://link.springer.com/chapter/10.1007/978-3-319-16480-9_4},
abstract={Recent studies and clinical practice have shown that the extraction of detailed eating behaviour indicators is critical in identifying risk factors and/or treating obesity and eating disorders, such as anorexia and bulimia nervosa. A number of single meal analysis methods that have been successfully applied are based on the Mandometer, a weight scale that continuously measures the weight of food on a plate over the course of a meal. Experimental meal analysis is performed using the cumulative food intake curve, which is produced by the semi-automatic processing of the Mandometer weight measurements, in tandem with the video recordings of the eating session. Due to its complexity and the video recording dependence, this process is not suited to a clinical or a real-life setting. In this work, we evaluate a method for automating the extraction of an accurate food intake curve, corrected for food additions during the meal and artificial weight fluctuations, using only the raw Mandometer output. Since the method requires no manual corrections or external video recordings it is appropriate for clinical or free-living use. Three algorithms are presented based on rules, greedy decisioning and exhaustive search, as well as evaluation methods of the Mandometer measurements. Experiments on a set of 114 meals collected from both normal and disordered eaters in a clinical environment illustrate the effectiveness of the proposed approach.}
}

(C)
Vasileios Papapanagiotou, Christos Diou, Zhou Lingchuan, Janet van den Boer, Monica Mars and Anastasios Delopoulos
"Fractal Nature of Chewing Sounds"
New Trends in Image Analysis and Processing--ICIAP 2015 Workshops, pp. 401-408, 2015 Apr
[Abstract][BibTex][pdf]

In the battle against Obesity as well as Eating Disorders, non-intrusive dietary monitoring has been investigated by many researchers. For this purpose, one of the most promising modalities is the acoustic signal captured by a common microphone placed inside the outer ear canal. Various chewing detection algorithms for this type of signals exist in the literature. In this work, we perform a systematic analysis of the fractal nature of chewing sounds, and find that the Fractal Dimension is substantially different between chewing and talking. This holds even for severely down-sampled versions of the recordings. We derive chewing detectors based on the fractal dimension of the recorded signals that can clearly discriminate chewing from non-chewing sounds. We experimentally evaluate snacking detection based on the proposed chewing detector, and we compare our approach against well known counterparts. Experimental results on a large dataset of 10 subjects and total recordings duration of more than 8 hours demonstrate the high effectiveness of our method. Furthermore, there exists indication that discrimination between different properties (such as crispness) is possible.

@inproceedings{Papapanagiotou2015Fractal,
author={Vasileios Papapanagiotou and Christos Diou and Zhou Lingchuan and Janet van den Boer and Monica Mars and Anastasios Delopoulos},
title={Fractal Nature of Chewing Sounds},
booktitle={New Trends in Image Analysis and Processing--ICIAP 2015 Workshops},
pages={401-408},
year={2015},
month={04},
date={2015-04-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2015fractal.pdf},
doi={10.1007/978-3-319-23222-5_49},
publishersurl={https://link.springer.com/chapter/10.1007/978-3-319-23222-5_49},
abstract={In the battle against Obesity as well as Eating Disorders, non-intrusive dietary monitoring has been investigated by many researchers. For this purpose, one of the most promising modalities is the acoustic signal captured by a common microphone placed inside the outer ear canal. Various chewing detection algorithms for this type of signals exist in the literature. In this work, we perform a systematic analysis of the fractal nature of chewing sounds, and find that the Fractal Dimension is substantially different between chewing and talking. This holds even for severely down-sampled versions of the recordings. We derive chewing detectors based on the fractal dimension of the recorded signals that can clearly discriminate chewing from non-chewing sounds. We experimentally evaluate snacking detection based on the proposed chewing detector, and we compare our approach against well known counterparts. Experimental results on a large dataset of 10 subjects and total recordings duration of more than 8 hours demonstrate the high effectiveness of our method. Furthermore, there exists indication that discrimination between different properties (such as crispness) is possible.}
}

(C)
Vasileios Papapanagiotou, Christos Diou, Billy Langlet, Ioannis Ioakimidis and Anastasios Delopoulos
"A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements"
2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 7853-7856, IEEE, 2015 Aug
[Abstract][BibTex][pdf]

Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject's food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach.

@inproceedings{Papapanagiotou2015Parametric,
author={Vasileios Papapanagiotou and Christos Diou and Billy Langlet and Ioannis Ioakimidis and Anastasios Delopoulos},
title={A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements},
booktitle={2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages={7853-7856},
publisher={IEEE},
year={2015},
month={08},
date={2015-08-01},
url={https://mug.ee.auth.gr/wp-content/uploads/papapanagiotou2015parametric.pdf},
doi={10.1109/EMBC.2015.7320212},
publishersurl={https://ieeexplore.ieee.org/document/7320212/},
abstract={Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject's food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach.}
}

2014

(C)
Christos Maramis, Christos Diou, Ioannis Ioakeimidis, Irini Lekka, Gabriela Dudnik, Monica Mars, Nikos Maglaveras, Cecilia Bergh and Anastasios Delopoulos
"SPLENDID: Preventing Obesity and Eating Disorders through Long-term Behavioural Modifications"
MOBIHEALTH 2014, Athens, Greece, 2014 Nov
[Abstract][BibTex]

@inproceedings{Maramis2017SPLENDID,
author={Christos Maramis and Christos Diou and Ioannis Ioakeimidis and Irini Lekka and Gabriela Dudnik and Monica Mars and Nikos Maglaveras and Cecilia Bergh and Anastasios Delopoulos},
title={SPLENDID: Preventing Obesity and Eating Disorders through Long-term Behavioural Modifications},
booktitle={MOBIHEALTH 2014},
address={Athens, Greece},
year={2014},
month={11},
date={2014-11-01}
}

(C)
Ioannis Sarafis, Christos Diou and Anastasios Delopoulos
"Building Robust Concept Detectors from Clickthrough Data: A Study in the MSR-Bing Dataset"
2014 9th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), pp. 66-71, 2014 Nov
[Abstract][BibTex][pdf]

In this paper we extend our previous work on strategies for automatically constructing noise resilient SVM detectors from clickthrough data for large scale concept-based image retrieval. First, search log data is used in conjunction with Information Retrieval (IR) models to score images with respect to each concept. The IR models evaluated in this work include Vector Space Models (VSM), BM25 and Language Models (LM). The scored images are then used to create training sets for SVM and appropriate sample weights for two SVM variants: the Fuzzy SVM (FSVM) and the Power SVM (PSVM). These SVM variants incorporate weights for each individual training sample and can therefore be used to model label uncertainty at the classifier level. Experiments on the MSR-Bing Image Retrieval Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) show that FSVM is the most robust SVM algorithm for handling label noise and that the highest performance is achieved with weights derived from VSM. These results extend our previous findings on the value of FSVM from professional image archives to large-scale general purpose search engines, and furthermore identify VSM as the most appropriate sample weighting model.

@inproceedings{Sarafis2014Building,
author={Ioannis Sarafis and Christos Diou and Anastasios Delopoulos},
title={Building Robust Concept Detectors from Clickthrough Data: A Study in the MSR-Bing Dataset},
booktitle={2014 9th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP)},
pages={66-71},
year={2014},
month={11},
date={2014-11-01},
url={http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6978955},
doi={10.1109/SMAP.2014},
abstract={In this paper we extend our previous work on strategies for automatically constructing noise resilient SVM detectors from clickthrough data for large scale concept-based image retrieval. First, search log data is used in conjunction with Information Retrieval (IR) models to score images with respect to each concept. The IR models evaluated in this work include Vector Space Models (VSM), BM25 and Language Models (LM). The scored images are then used to create training sets for SVM and appropriate sample weights for two SVM variants: the Fuzzy SVM (FSVM) and the Power SVM (PSVM). These SVM variants incorporate weights for each individual training sample and can therefore be used to model label uncertainty at the classifier level. Experiments on the MSR-Bing Image Retrieval Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) show that FSVM is the most robust SVM algorithm for handling label noise and that the highest performance is achieved with weights derived from VSM. These results extend our previous findings on the value of FSVM from professional image archives to large-scale general purpose search engines, and furthermore identify VSM as the most appropriate sample weighting model.}
}

(C)
Ioannis Sarafis, Christos Diou, Theodora Tsikrika and Anastasios Delopoulos
"Weighted SVM from clickthrough data for image retrieval"
2014 IEEE International Conference on Image Processing (ICIP), pp. 3013-3017, 2014 Aug
[Abstract][BibTex][pdf]

In this paper we propose a novel approach to training noise-resilient concept detectors from clickthrough data collected by image search engines. We take advantage of the query logs to automatically produce concept detector training sets; these suffer, though, from label noise, i.e., erroneously assigned labels. We explore two alternative approaches for handling noisy training data at the classifier level by training concept detectors with two SVM variants: the Fuzzy SVM and the Power SVM. Experimental results on images collected from a professional image search engine indicate that 1) Fuzzy SVM outperforms both SVM and Power SVM and is the most effective approach towards handling label noise and 2) the performance gain of Fuzzy SVM compared to SVM increases progressively with the noise level in the training sets.

@inproceedings{Sarafis2014Weighted,
author={Ioannis Sarafis and Christos Diou and Theodora Tsikrika and Anastasios Delopoulos},
title={Weighted SVM from clickthrough data for image retrieval},
booktitle={2014 IEEE International Conference on Image Processing (ICIP)},
pages={3013-3017},
year={2014},
month={08},
date={2014-08-01},
url={http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=7025609},
doi={10.1109/ICIP.2014.7025609},
abstract={In this paper we propose a novel approach to training noise-resilient concept detectors from clickthrough data collected by image search engines. We take advantage of the query logs to automatically produce concept detector training sets; these suffer, though, from label noise, i.e., erroneously assigned labels. We explore two alternative approaches for handling noisy training data at the classifier level by training concept detectors with two SVM variants: the Fuzzy SVM and the Power SVM. Experimental results on images collected from a professional image search engine indicate that 1) Fuzzy SVM outperforms both SVM and Power SVM and is the most effective approach towards handling label noise and 2) the performance gain of Fuzzy SVM compared to SVM increases progressively with the noise level in the training sets.}
}

(C)
Theodora Tsikrika and Christos Diou
"Multi-evidence User Group Discovery in Professional Image Search"
Advances in Information Retrieval: 36th European Conference on IR Research, ECIR 2014, Amsterdam, The Netherlands, April 13-16, 2014., pp. 693-699, Springer International Publishing, Cham, 2014 Apr
[Abstract][BibTex][pdf]

This work evaluates the combination of multiple evidence for discovering groups of users with similar interests. User groups are created by analysing the search logs recorded for a sample of 149 users of a professional image search engine in conjunction with the textual and visual features of the clicked images, and evaluated by exploiting their topical classification. The results indicate that the discovered user groups are meaningful and that combining textual and visual features improves the homogeneity of the user groups compared to each individual feature.

@inproceedings{Tsikrika2014Multi,
author={Theodora Tsikrika and Christos Diou},
title={Multi-evidence User Group Discovery in Professional Image Search},
booktitle={Advances in Information Retrieval: 36th European Conference on IR Research, ECIR 2014, Amsterdam, The Netherlands, April 13-16, 2014.},
pages={693-699},
publisher={Springer International Publishing},
address={Cham},
year={2014},
month={04},
date={2014-04-13},
url={http://dx.doi.org/10.1007/978-3-319-06028-6_78},
doi={10.1007/978-3-319-06028-6_78},
abstract={This work evaluates the combination of multiple evidence for discovering groups of users with similar interests. User groups are created by analysing the search logs recorded for a sample of 149 users of a professional image search engine in conjunction with the textual and visual features of the clicked images, and evaluated by exploiting their topical classification. The results indicate that the discovered user groups are meaningful and that combining textual and visual features improves the homogeneity of the user groups compared to each individual feature.}
}

2013

(C)
Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles A. Mitkas
"Agent-Based Small-Scale Energy Consumer Models for Energy Portfolio Management"
International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2013 IEEE/WIC/ACM, pp. 94-101, IEEE, 2013 Nov
[Abstract][BibTex][pdf]

In contemporary power systems, residential consumers may account for up to 50% of a country's total electrical energy consumption. Even though they constitute a significant portion of the energy market, not much has been achieved towards eliminating the inability for energy suppliers to perform long-term portfolio management, thus maximizing their revenue. The root cause of these problems is the difficulty in modeling consumers' behavior, based on their everyday activities and personal comfort. If one were able to provide targeted incentives based on consumer profiles, the expected impact and market benefits would be significant. This paper introduces a formal residential consumer modeling methodology, that allows (i) the decomposition of the observed electrical load curves into consumer activities and, (ii) the evaluation of the impact of behavioral changes on the household's aggregate load curve. Analyzing electrical consumption measurements from the DEHEMS research project enabled the model extraction of real-life consumers. Experiments indicate that the proposed methodology produces accurate small-scale consumer models and verify that small shifts in appliance usage times are sufficient to achieve significant peak power reduction.

@inproceedings{Chrysopoulos2013Agent,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Agent-Based Small-Scale Energy Consumer Models for Energy Portfolio Management},
booktitle={International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2013 IEEE/WIC/ACM},
pages={94-101},
publisher={IEEE},
year={2013},
month={11},
date={2013-11-17},
url={http://dx.doi.org/10.1109/WI-IAT.2013.96},
doi={10.1109/WI-IAT.2013.96},
abstract={In contemporary power systems, residential consumers may account for up to 50% of a country's total electrical energy consumption. Even though they constitute a significant portion of the energy market, not much has been achieved towards eliminating the inability for energy suppliers to perform long-term portfolio management, thus maximizing their revenue. The root cause of these problems is the difficulty in modeling consumers' behavior, based on their everyday activities and personal comfort. If one were able to provide targeted incentives based on consumer profiles, the expected impact and market benefits would be significant. This paper introduces a formal residential consumer modeling methodology, that allows (i) the decomposition of the observed electrical load curves into consumer activities and, (ii) the evaluation of the impact of behavioral changes on the household's aggregate load curve. Analyzing electrical consumption measurements from the DEHEMS research project enabled the model extraction of real-life consumers. Experiments indicate that the proposed methodology produces accurate small-scale consumer models and verify that small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}

2012

(C)
Georgios T. Andreou, Andreas L. Symeonidis, Christos Diou, Pericles A. Mitkas and Dimitrios P. Labridis
"A framework for the implementation of large scale Demand Response"
2012 International Conference on Smart Grid Technology, Economics and Policies (SG-TEP), pp. 1-4, IEEE, 2012 Dec
[Abstract][BibTex][pdf]

The rationalization of electrical energy consumption is a constant goal driving research over the last decades. The pursuit of efficient solutions requires the involvement of electrical energy consumers through Demand Response programs. In this study, a framework is presented that can serve as a tool for designing and simulating Demand Response programs, aiming at energy efficiency through consumer behavioral change. It provides the capability to dynamically model groups of electrical energy consumers with respect to their consumption, as well as their behavior. This framework is currently under development within the scope of the EU funded FP7 project “CASSANDRA - A multivariate platform for assessing the impact of strategic decisions in electrical power systems”.

@inproceedings{Andreou2012Framework,
author={Georgios T. Andreou and Andreas L. Symeonidis and Christos Diou and Pericles A. Mitkas and Dimitrios P. Labridis},
title={A framework for the implementation of large scale Demand Response},
booktitle={2012 International Conference on Smart Grid Technology, Economics and Policies (SG-TEP)},
pages={1-4},
publisher={IEEE},
year={2012},
month={12},
date={2012-12-03},
url={http://dx.doi.org/10.1109/SG-TEP.2012.6642380},
doi={10.1109/SG-TEP.2012.6642380},
abstract={The rationalization of electrical energy consumption is a constant goal driving research over the last decades. The pursuit of efficient solutions requires the involvement of electrical energy consumers through Demand Response programs. In this study, a framework is presented that can serve as a tool for designing and simulating Demand Response programs, aiming at energy efficiency through consumer behavioral change. It provides the capability to dynamically model groups of electrical energy consumers with respect to their consumption, as well as their behavior. This framework is currently under development within the scope of the EU funded FP7 project “CASSANDRA - A multivariate platform for assessing the impact of strategic decisions in electrical power systems”.}
}

2011

(C)
Christos F. Maramis, Anastasios N. Delopoulos, Alexandros F. Lambropoulos and Sokratis P. Katafigiotis
"A system for automatic HPV typing via PCR-RFLP gel electrophoresis"
2011 IEEE Conference on Automation Science and Engineering (CASE), pp. 549-556, IEEE, 2011 Aug
[Abstract][BibTex][pdf]

The identification of the types of the human papillomavirus (HPV) that have infected a female patient provides valuable information as regards to her risk for developing cervical cancer. A widely used method for performing the above task (namely HPV typing) is PCR-RFLP gel electrophoresis. However, the conventional HPV typing protocol is error-prone and resource-ineffective due to lack of interaction between the phases involved in it. In order to treat these shortcomings, we introduce a novel HPV typing system that can be built upon widely available laboratory equipment. The proposed workflow of the system automates the task of HPV typing via PCR-RFLP gel electrophoresis. The proof-of-concept of the proposed methodology is evaluated via an experiment that emulates the operation of the introduced system on a set of real HPV data.

@inproceedings{Maramis2011System,
author={Christos F. Maramis and Anastasios N. Delopoulos and Alexandros F. Lambropoulos and Sokratis P. Katafigiotis},
title={A system for automatic HPV typing via PCR-RFLP gel electrophoresis},
booktitle={2011 IEEE Conference on Automation Science and Engineering (CASE)},
pages={549-556},
publisher={IEEE},
year={2011},
month={08},
date={2011-08-14},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/06042466.pd},
doi={10.1109/CASE.2011.6042466},
abstract={The identification of the types of the human papillomavirus (HPV) that have infected a female patient provides valuable information as regards to her risk for developing cervical cancer. A widely used method for performing the above task (namely HPV typing) is PCR-RFLP gel electrophoresis. However, the conventional HPV typing protocol is error-prone and resource-ineffective due to lack of interaction between the phases involved in it. In order to treat these shortcomings, we introduce a novel HPV typing system that can be built upon widely available laboratory equipment. The proposed workflow of the system automates the task of HPV typing via PCR-RFLP gel electrophoresis. The proof-of-concept of the proposed methodology is evaluated via an experiment that emulates the operation of the introduced system on a set of real HPV data.}
}

2010

(C)
Christos Diou, George Stephanopoulos and Anastasios Delopoulos
"The Multimedia Understanding Group at TRECVID-2010"
Proceedings of the TRECVID 2010 Workshop, 2010 Jan
[Abstract][BibTex][pdf]

This is a report of the Multimedia Understanding Group participation in TRECVID-2010, where we submitted full runs for the Semantic Indexing (SIN) task. Our submission aims at experimentally evaluating three research items, that are important for work that is currently in progress. First, we examine the use of bag-of-words audio features for video concept detection, with noisy and/or low-quality video data. Although audio is important for some concepts and has shown promising results at other datasets, the results indicate that it can also lead to a decrease in performance when the quality is low and the negative examples are not adequately represented. We also explore the possibility of using a cross-domain concept fusion approach for reducing the number of dimensions at the final classifier. The corresponding experiments show, however, that when drastically reducing the number of dimensions the effectiveness drops. Finally, we also examined a transformation of the feature space, using a set of functions that are parametrically constructed from the data.

@inproceedings{Diou2010Multimedia,
author={Christos Diou and George Stephanopoulos and Anastasios Delopoulos},
title={The Multimedia Understanding Group at TRECVID-2010},
booktitle={Proceedings of the TRECVID 2010 Workshop},
year={2010},
month={01},
date={2010-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/mug-auth.pdf},
abstract={This is a report of the Multimedia Understanding Group participation in TRECVID-2010, where we submitted full runs for the Semantic Indexing (SIN) task. Our submission aims at experimentally evaluating three research items, that are important for work that is currently in progress. First, we examine the use of bag-of-words audio features for video concept detection, with noisy and/or low-quality video data. Although audio is important for some concepts and has shown promising results at other datasets, the results indicate that it can also lead to a decrease in performance when the quality is low and the negative examples are not adequately represented. We also explore the possibility of using a cross-domain concept fusion approach for reducing the number of dimensions at the final classifier. The corresponding experiments show, however, that when drastically reducing the number of dimensions the effectiveness drops. Finally, we also examined a transformation of the feature space, using a set of functions that are parametrically constructed from the data.}
}

(C)
Christos F. Maramis, Anastasios N. Delopoulos and Alexandros F. Lambropoulos
"Analysis of PCR-RFLP Gel Electrophoresis Images for Accurate and Automated HPV Typing"
Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine, pp. 1-6, IEEE, 2010 Nov
[Abstract][BibTex][pdf]

The identification of the types of the human papillomavirus (HPV) that have infected a woman provides valuable information regarding her risk of developing cervical cancer. HPV typing is often performed by manually analyzing PCR-RFLP gel electrophoresis images. However, the typing procedure currently employed suffers from unsatisfactory accuracy and high time consumption. To address these problems, we propose a novel approach to HPV typing that automates the analysis of the electrophoretic images and concurrently improves the accuracy of the typing decision. The proposed methodology contributes both to the extraction of information from the images, through a novel modeling approach, and to the process of making a typing decision based on this information, through the introduction of an original HPV typing algorithm. The efficiency of our approach is demonstrated with the help of a complex worked example that involves multiple HPV infections.

@inproceedings{Maramis2010Analysis,
author={Christos F. Maramis and Anastasios N. Delopoulos and Alexandros F. Lambropoulos},
title={Analysis of PCR-RFLP Gel Electrophoresis Images for Accurate and Automated HPV Typing},
booktitle={Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine},
pages={1-6},
publisher={IEEE},
year={2010},
month={11},
date={2010-11-03},
url={http://mug.ee.auth.gr/wp-content/uploads/itab2010.pdf},
doi={10.1109/ITAB.2010.5687732},
abstract={The identification of the types of the human papillomavirus (HPV) that have infected a woman provides valuable information regarding her risk of developing cervical cancer. HPV typing is often performed by manually analyzing PCR-RFLP gel electrophoresis images. However, the typing procedure currently employed suffers from unsatisfactory accuracy and high time consumption. To address these problems, we propose a novel approach to HPV typing that automates the analysis of the electrophoretic images and concurrently improves the accuracy of the typing decision. The proposed methodology contributes both to the extraction of information from the images, through a novel modeling approach, and to the process of making a typing decision based on this information, through the introduction of an original HPV typing algorithm. The efficiency of our approach is demonstrated with the help of a complex worked example that involves multiple HPV infections.}
}

(C)
Christos Maramis, Evangelia Minga and Anastasios Delopoulos
"An Application for Semi-automatic HPV Typing of PCR-RFLP Images"
Image Analysis and Recognition: 7th International Conference, ICIAR 2010, Póvoa de Varzim, Portugal, June 21-23, 2010, Proceedings, Part II, pp. 173-184, Springer Berlin Heidelberg, 2010 Jun
[Abstract][BibTex][pdf]

The human papillomavirus, which comes in over 100 types, is the causal factor of cervical cancer. The identification of the types that have infected the cervix of a patient is a very laborious yet critical task for molecular biologists that is still performed manually. HPV-Typer is a novel research software application that assists biologists by analyzing digitized images of electrophoresed gel matrices that contain cervical samples processed by the PCR-RFLP technique, in order to semi-automatically identify the existing types of the virus. HPV-Typer has been designed to be functional with minimal user input and yet to allow the user to intervene in any step of the typing procedure.

@inproceedings{Maramis2010Application,
author={Christos Maramis and Evangelia Minga and Anastasios Delopoulos},
title={An Application for Semi-automatic HPV Typing of PCR-RFLP Images},
booktitle={Image Analysis and Recognition: 7th International Conference, ICIAR 2010, P{\'o}voa de Varzim, Portugal, June 21-23, 2010, Proceedings, Part II},
pages={173-184},
publisher={Springer Berlin Heidelberg},
year={2010},
month={06},
date={2010-06-21},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/hpvmaramis10.pdf},
doi={10.1007/978-3-642-13775-4_18},
abstract={The human papillomavirus, which comes in over 100 types, is the causal factor of cervical cancer. The identification of the types that have infected the cervix of a patient is a very laborious yet critical task for molecular biologists that is still performed manually. HPV-Typer is a novel research software application that assists biologists by analyzing digitized images of electrophoresed gel matrices that contain cervical samples processed by the PCR-RFLP technique, in order to semi-automatically identify the existing types of the virus. HPV-Typer has been designed to be functional with minimal user input and yet to allow the user to intervene in any step of the typing procedure.}
}

(C)
Christos Maramis and Anastasios Delopoulos
"Efficient Quantitative Information Extraction from PCR-RFLP Gel Electrophoresis Images"
2010 20th International Conference on Pattern Recognition (ICPR), pp. 2560-2563, IEEE, 2010 Aug
[Abstract][BibTex][pdf]

For the purpose of PCR-RFLP analysis, as in the case of human papillomavirus (HPV) typing, quantitative information needs to be extracted from images resulting from one-dimensional gel electrophoresis by associating the image intensity with the concentration of biological material at the corresponding position on a gel matrix. However, the background intensity of the image stands in the way of quantifying this association. We propose a novel, efficient methodology for modeling the image background with a polynomial function and prove that this can benefit the extraction of accurate information from the lane intensity profile when modeled by a superposition of properly shaped parametric functions.

@inproceedings{Maramis2010Efficient,
author={Christos Maramis and Anastasios Delopoulos},
title={Efficient Quantitative Information Extraction from PCR-RFLP Gel Electrophoresis Images},
booktitle={2010 20th International Conference on Pattern Recognition (ICPR)},
pages={2560-2563},
publisher={IEEE},
year={2010},
month={08},
date={2010-08-23},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/05595784.pdf},
doi={10.1109/ICPR.2010.627},
abstract={For the purpose of PCR-RFLP analysis, as in the case of human papillomavirus (HPV) typing, quantitative information needs to be extracted from images resulting from one-dimensional gel electrophoresis by associating the image intensity with the concentration of biological material at the corresponding position on a gel matrix. However, the background intensity of the image stands in the way of quantifying this association. We propose a novel, efficient methodology for modeling the image background with a polynomial function and prove that this can benefit the extraction of accurate information from the lane intensity profile when modeled by a superposition of properly shaped parametric functions.}
}
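The entry above models the gel image background with a polynomial before quantifying lane intensities. As a rough illustration of that idea (not the paper's actual pipeline: the synthetic profile, the band positions, and the percentile-based masking below are invented for the example), one can fit a low-order polynomial to the band-free parts of a 1-D lane profile and subtract it:

```python
import numpy as np

# Hypothetical 1-D lane intensity profile: a slowly varying background
# plus two narrow "bands" of biological material.
x = np.linspace(0.0, 1.0, 200)
background = 0.5 + 0.3 * x - 0.2 * x**2
bands = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.8 * np.exp(-((x - 0.7) / 0.02) ** 2)
profile = background + bands

# Estimate the background with a low-order polynomial fitted only to
# points that are unlikely to belong to a band (low-intensity regions).
mask = profile < np.percentile(profile, 60)
coeffs = np.polyfit(x[mask], profile[mask], deg=2)
estimated_bg = np.polyval(coeffs, x)

# Background-corrected profile: band peaks remain, baseline is near zero.
corrected = profile - estimated_bg
```

With the background removed, the residual peaks can then be modeled by parametric functions, as the abstract describes.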

2009

(C)
Christos Diou, George Stephanopoulos, Nikos Dimitriou, Panagiotis Panagiotopoulos, Christos Papachristou, Anastasios Delopoulos, Henning Rode, Theodora Tsikrika, Arjen P. de Vries, Daniel Schneider, Jochen Schwenninger, Marie-Luce Viaud, Agnès Saulnier, Peter Altendorf, Birgit Schröter, Matthias Elser, Angel Rego, Alex Rodriguez, Cristina Martínez, Iñaki Etxaniz, Gérard Dupont, Bruno Grilhères, Nicolas Martin, Nozha Boujemaa, Alexis Joly, Raffi Enficiaud and Anne Verroust
"VITALAS at TRECVID 2009"
2009 TREC Video Retrieval Evaluation Workshop TRECVID-2009, 2009 Jan
[Abstract][BibTex][pdf]

This paper describes the participation of VITALAS in the TRECVID-2009 evaluation where we submitted runs for the High-Level Feature Extraction (HLFE) and Interactive Search tasks. For the HLFE task, we focus on the evaluation of low-level feature sets and fusion methods. The runs employ multiple low-level features based on all available modalities (visual, audio and text) and the results show that use of such features improves the retrieval effectiveness significantly. We also use a concept score fusion approach that achieves good results with reduced low-level feature vector dimensionality. Furthermore, a weighting scheme is introduced for cluster assignment in the “bag-of-words” approach. Our runs achieved good performance compared to a baseline run and the submissions of other TRECVID-2009 participants. For the Interactive Search task, we focus on the evaluation of the integrated VITALAS system in order to gain insights into the use and effectiveness of the system’s search functionalities on (the combination of) multiple modalities and study the behavior of two user groups: professional archivists and non-professional users. Our analysis indicates that both user groups submit about the same total number of queries and use the search functionalities in a similar way, but professional users save twice as many shots and examine shots deeper in the ranked retrieved list. The agreement between the TRECVID assessors and our users was quite low. In terms of the effectiveness of the different search modalities, similarity searches retrieve on average twice as many relevant shots as keyword searches, fused searches three times as many, while concept searches retrieve even up to five times as many relevant shots, indicating the benefits of the use of robust concept detectors in multimodal video retrieval.

@inproceedings{Diou2009VITALAS,
author={Christos Diou and George Stephanopoulos and Nikos Dimitriou and Panagiotis Panagiotopoulos and Christos Papachristou and Anastasios Delopoulos and Henning Rode and Theodora Tsikrika and Arjen P. de Vries and Daniel Schneider and Jochen Schwenninger and Marie-Luce Viaud and Agnès Saulnier and Peter Altendorf and Birgit Schröter and Matthias Elser and Angel Rego and Alex Rodriguez and Cristina Martínez and Iñaki Etxaniz and Gérard Dupont and Bruno Grilhères and Nicolas Martin and Nozha Boujemaa and Alexis Joly and Raffi Enficiaud and Anne Verroust},
title={VITALAS at TRECVID 2009},
booktitle={2009 TREC Video Retrieval Evaluation Workshop TRECVID-2009},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/vitalas09.pdf},
abstract={This paper describes the participation of VITALAS in the TRECVID-2009 evaluation where we submitted runs for the High-Level Feature Extraction (HLFE) and Interactive Search tasks. For the HLFE task, we focus on the evaluation of low-level feature sets and fusion methods. The runs employ multiple low-level features based on all available modalities (visual, audio and text) and the results show that use of such features improves the retrieval effectiveness significantly. We also use a concept score fusion approach that achieves good results with reduced low-level feature vector dimensionality. Furthermore, a weighting scheme is introduced for cluster assignment in the “bag-of-words” approach. Our runs achieved good performance compared to a baseline run and the submissions of other TRECVID-2009 participants. For the Interactive Search task, we focus on the evaluation of the integrated VITALAS system in order to gain insights into the use and effectiveness of the system’s search functionalities on (the combination of) multiple modalities and study the behavior of two user groups: professional archivists and non-professional users. Our analysis indicates that both user groups submit about the same total number of queries and use the search functionalities in a similar way, but professional users save twice as many shots and examine shots deeper in the ranked retrieved list. The agreement between the TRECVID assessors and our users was quite low. In terms of the effectiveness of the different search modalities, similarity searches retrieve on average twice as many relevant shots as keyword searches, fused searches three times as many, while concept searches retrieve even up to five times as many relevant shots, indicating the benefits of the use of robust concept detectors in multimodal video retrieval.}
}

(C)
Manolis Falelakis, Lazaros Karydas and Anastasios Delopoulos
"Knowledge-Based Concept Score Fusion for Multimedia Retrieval"
WIC International Conference on Active Media Technologies, pp. 126-135, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009 Jan
[Abstract][BibTex][pdf]

Automated detection of semantic concepts in multimedia documents has been attracting intensive research efforts over the last years. These efforts can be generally classified into two categories of methodologies: the ones that attempt to solve the problem using discriminative methods (classifiers) and those that build knowledge-based models, as driven by the W3C consortium. This paper proposes a methodology that tries to combine both approaches for multimedia retrieval. Our main contribution is the adoption of a formal model for defining concepts using logic and the incorporation of the output of concept classifiers into the computation of annotation scores. Our method does not require the computationally intensive training of new classifiers for the concepts defined. Instead, it employs a knowledge-based mechanism to combine the output scores of existing classifiers and can be used for either detecting new concepts or enhancing the accuracy of existing detectors. Optimization procedures are employed to adapt the concept definitions to the multimedia corpus at hand, further improving the attained accuracy. Experiments using the TRECVID2005 video collection demonstrate promising results.

@inproceedings{Falelakis2009Knowledge,
author={Manolis Falelakis and Lazaros Karydas and Anastasios Delopoulos},
title={Knowledge-Based Concept Score Fusion for Multimedia Retrieval},
booktitle={WIC International Conference on Active Media Technologies},
pages={126-135},
publisher={Springer Berlin Heidelberg},
address={Berlin, Heidelberg},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_978-3-642-04875-3_17.pdf},
doi={10.1007/978-3-642-04875-3_17},
abstract={Automated detection of semantic concepts in multimedia documents has been attracting intensive research efforts over the last years. These efforts can be generally classified into two categories of methodologies: the ones that attempt to solve the problem using discriminative methods (classifiers) and those that build knowledge-based models, as driven by the W3C consortium. This paper proposes a methodology that tries to combine both approaches for multimedia retrieval. Our main contribution is the adoption of a formal model for defining concepts using logic and the incorporation of the output of concept classifiers into the computation of annotation scores. Our method does not require the computationally intensive training of new classifiers for the concepts defined. Instead, it employs a knowledge-based mechanism to combine the output scores of existing classifiers and can be used for either detecting new concepts or enhancing the accuracy of existing detectors. Optimization procedures are employed to adapt the concept definitions to the multimedia corpus at hand, further improving the attained accuracy. Experiments using the TRECVID2005 video collection demonstrate promising results.}
}
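The fusion idea in the entry above, scoring a new concept by logically combining the output scores of classifiers that already exist, can be illustrated with min/max fuzzy operators. This is a generic sketch, not the paper's exact model: the concept names, the scores, and the choice of min/max as the logical connectives are assumptions for the example.

```python
# Hypothetical output scores of existing concept classifiers for one shot.
scores = {"road": 0.9, "car": 0.7, "person": 0.2}

def fuzzy_and(*vals):
    return min(vals)  # conjunction as minimum, a common fuzzy t-norm

def fuzzy_or(*vals):
    return max(vals)  # disjunction as maximum, a common t-conorm

# Score a new concept "traffic" without training a new classifier:
# traffic := road AND (car OR person)
traffic = fuzzy_and(scores["road"], fuzzy_or(scores["car"], scores["person"]))
print(traffic)  # 0.7
```

The paper additionally optimizes such definitions against the corpus; here the definition is fixed by hand.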

(C)
Manolis Falelakis, Christos Maramis, Irini Lekka, Pericles Mitkas and Anastasios Delopoulos
"An Ontology for Supporting Clincal Research on Cervical Cancer"
International Conference on Knowledge Engineering and Ontology Development, pp. 103-108, Madeira, Portugal, 2009 Jan
[Abstract][BibTex][pdf]

This work presents an ontology for cervical cancer that is positioned at the center of a research system for conducting association studies. The ontology aims at providing a unified "language" for various heterogeneous medical repositories. To this end, it contains both generic patient-management and domain-specific concepts, as well as proper unification rules. The inference scheme adopted is coupled with a procedural programming layer in order to comply with the design requirements.

@inproceedings{Falelakis2009Ontology,
author={Manolis Falelakis and Christos Maramis and Irini Lekka and Pericles Mitkas and Anastasios Delopoulos},
title={An Ontology for Supporting Clinical Research on Cervical Cancer},
booktitle={International Conference on Knowledge Engineering and Ontology Development},
pages={103-108},
address={Madeira, Portugal},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/keod2009v22.pdf},
abstract={This work presents an ontology for cervical cancer that is positioned at the center of a research system for conducting association studies. The ontology aims at providing a unified "language" for various heterogeneous medical repositories. To this end, it contains both generic patient-management and domain-specific concepts, as well as proper unification rules. The inference scheme adopted is coupled with a procedural programming layer in order to comply with the design requirements.}
}

(C)
Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos
"Are clickthrough data reliable as image annotations?"
Proceedings of the Theseus/ImageCLEF workshop on visual information retrieval evaluation, 2009 Sep
[Abstract][BibTex][pdf]

We examine the reliability of clickthrough data as concept-based image annotations, by comparing them against manual annotations, for different concept categories. Our analysis shows that, for many concepts, the image annotations generated by using clickthrough data are reliable, with up to 90% of true positives in the automatically annotated images compared to the manual ground truth. Concept categories, though, do not provide additional evidence about the types of concepts for which clickthrough-based image annotation performs well.

@inproceedings{Tsikrika2009Clickthrough,
author={Theodora Tsikrika and Christos Diou and Arjen P. de Vries and Anastasios Delopoulos},
title={Are clickthrough data reliable as image annotations?},
booktitle={Proceedings of the Theseus/ImageCLEF workshop on visual information retrieval evaluation},
year={2009},
month={09},
date={2009-09-29},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1.1.154.9693.pdf},
abstract={We examine the reliability of clickthrough data as concept-based image annotations, by comparing them against manual annotations, for different concept categories. Our analysis shows that, for many concepts, the image annotations generated by using clickthrough data are reliable, with up to 90% of true positives in the automatically annotated images compared to the manual ground truth. Concept categories, though, do not provide additional evidence about the types of concepts for which clickthrough-based image annotation performs well.}
}
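The reliability measurement described in the entry above amounts to checking what fraction of clickthrough-derived positive annotations are confirmed by the manual ground truth. A toy sketch, where the image identifiers and labels are invented for the example:

```python
# Hypothetical: images clicked for the query "beach" are treated as
# positive annotations for the concept "beach".
clicked = {"img1", "img2", "img3", "img4", "img5"}

# Manually annotated ground-truth positives for the same concept.
manual_positives = {"img1", "img2", "img3", "img4", "img9"}

# Fraction of clickthrough annotations confirmed by the ground truth
# (the "true positives" rate the abstract reports, up to 90% per concept).
true_positives = clicked & manual_positives
precision = len(true_positives) / len(clicked)
print(precision)  # 0.8
```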

(C)
Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos
"Image Annotation Using Clickthrough Data"
Proceedings of the ACM International Conference on Image and Video Retrieval, ACM, New York, NY, USA, 2009 Jan
[Abstract][BibTex][pdf]

Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. The datasets used as well as an extensive presentation of the experimental results can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.

@inproceedings{Tsikrika2009Image,
author={Theodora Tsikrika and Christos Diou and Arjen P. de Vries and Anastasios Delopoulos},
title={Image Annotation Using Clickthrough Data},
booktitle={Proceedings of the ACM International Conference on Image and Video Retrieval},
publisher={ACM},
address={New York, NY, USA},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/a14-tsikrika.pdf},
doi={10.1145/1646396.1646415},
abstract={Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. The datasets used as well as an extensive presentation of the experimental results can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.}
}

2008

(C)
Theodoros Agorastos, Pericles Mitkas, Manolis Falelakis, Fotis Psomopoulos, Anastasios Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Christos Maramis, Alexandros Batzios, Irini Lekka, Vassilis Koutkias, Themistoklis Mikos, Antonios Tantsis and Nicos Maglaveras
"Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project"
World Cancer Congress, Geneva, Switzerland, 2008 Jan
[Abstract][BibTex]

@inproceedings{Agorastos2008Large,
author={Theodoros Agorastos and Pericles Mitkas and Manolis Falelakis and Fotis Psomopoulos and Anastasios Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Christos Maramis and Alexandros Batzios and Irini Lekka and Vassilis Koutkias and Themistoklis Mikos and Antonios Tantsis and Nicos Maglaveras},
title={Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project},
booktitle={World Cancer Congress},
address={Geneva, Switzerland},
year={2008},
month={01},
date={2008-01-01}
}

(C)
Christos Dimou, Manolis Falelakis, Andreas Symeonidis, Anastasios Delopoulos and Pericles Mitkas
"Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation"
IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT2008), pp. 336-339, Sydney, Australia, 2008 Jan
[Abstract][BibTex][pdf]

The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.

@inproceedings{Dimou2008Constructing,
author={Christos Dimou and Manolis Falelakis and Andreas Symeonidis and Anastasios Delopoulos and Pericles Mitkas},
title={Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT2008)},
pages={336-339},
address={Sydney, Australia},
year={2008},
month={01},
date={2008-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/Dimou-IAT-08.pdf},
abstract={The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.}
}

(C)
Christos Diou, Christos Papachristou, Panagiotis Panagiotopoulos, George Stephanopoulos, Nikos Dimitriou, Anastasios Delopoulos, Henning Rode, Robin Aly, Arjen P. de Vries and Theodora Tsikrika
"VITALAS at TRECVID-2008"
6th TREC Video Retrieval Evaluation Workshop TRECVID08, Gaithersburg, USA, 2008 Jan
[Abstract][BibTex][pdf]

This is the first participation of VITALAS in TRECVID. In the high-level feature extraction task, our submitted runs are based mainly on visual features, while one run utilizes audio information as well; the text is not used. The experiments performed aim at evaluating the effectiveness of different approaches to input processing prior to the final classification (i.e., ranking) stage. These are (i) clustering of feature vectors within the feature space, (ii) fusion of classifier output scores for other concepts and (iii) feature selection. The results indicate that (i) fusion of the classifier output of other concepts can provide valuable information, even if the original features are not discriminative, (ii) feature selection generally improves the results (especially when the original number of dimensions is high) and (iii) clustering within the feature space with a small number of clusters does not seem to provide any significant additional information. Our experiments for the search task are focused on concept retrieval. We generate an artificial text collection by merging context descriptions according to the probability of each concept occurring in a given shot. To make the approach feasible, we further need to investigate techniques for pruning the dense shot-concept matrix. Despite the poor overall retrieval quality, our concept search runs show a similar performance to the pure ASR run. Only the combination of ASR and concept search yields considerable improvements. Among the tested concept pruning strategies, the simple top-k selection works better than the deviation-based thresholding.

@inproceedings{Diou2008VITALAS,
author={Christos Diou and Christos Papachristou and Panagiotis Panagiotopoulos and George Stephanopoulos and Nikos Dimitriou and Anastasios Delopoulos and Henning Rode and Robin Aly and Arjen P. de Vries and Theodora Tsikrika},
title={VITALAS at TRECVID-2008},
booktitle={6th TREC Video Retrieval Evaluation Workshop TRECVID08},
address={Gaithersburg, USA},
year={2008},
month={01},
date={2008-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/vitalas.pdf},
abstract={This is the first participation of VITALAS in TRECVID. In the high-level feature extraction task, our submitted runs are based mainly on visual features, while one run utilizes audio information as well; the text is not used. The experiments performed aim at evaluating the effectiveness of different approaches to input processing prior to the final classification (i.e., ranking) stage. These are (i) clustering of feature vectors within the feature space, (ii) fusion of classifier output scores for other concepts and (iii) feature selection. The results indicate that (i) fusion of the classifier output of other concepts can provide valuable information, even if the original features are not discriminative, (ii) feature selection generally improves the results (especially when the original number of dimensions is high) and (iii) clustering within the feature space with a small number of clusters does not seem to provide any significant additional information. Our experiments for the search task are focused on concept retrieval. We generate an artificial text collection by merging context descriptions according to the probability of each concept occurring in a given shot. To make the approach feasible, we further need to investigate techniques for pruning the dense shot-concept matrix. Despite the poor overall retrieval quality, our concept search runs show a similar performance to the pure ASR run. Only the combination of ASR and concept search yields considerable improvements. Among the tested concept pruning strategies, the simple top-k selection works better than the deviation-based thresholding.}
}

(C)
Pericles Mitkas, Christos Maramis, Anastasios Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Manolis Falelakis, Fotis Psomopoulos, Alexandros Batzios, Nicos Maglaveras, Irini Lekka, Vassilis Koutkias, Theodoros Agorastos, Themistoklis Mikos and Antonios Tantsis
"ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer"
6th European Symposium on Biomedical Engineering, Chania, Crete, Greece, 2008 Jan
[Abstract][BibTex][pdf]

Advances in biomedical engineering have lately facilitated medical data acquisition, leading to increased availability of both genetic and phenotypic patient data. Particularly, in the area of cervical cancer, intensive research investigates the role of specific genetic and environmental factors in determining the persistence of the HPV virus – which is the primary causal factor of cervical cancer – and the subsequent progression of the disease. In this direction, genetic association studies constitute a widely used scientific approach for medical research. However, despite the increased data availability worldwide, individual studies are often inconclusive due to the physical and conceptual isolation of the medical centers, which limits the pool of data actually available to each researcher. ASSIST, an EU-funded research project, aims at facilitating medical research on cervical cancer by tackling these data isolation issues. To accomplish that, it virtually unifies multiple patient record repositories, physically located at different sites, and subsequently employs inference techniques on the unified medical knowledge to enable the execution of cervical cancer related association studies that comprise both genotypic and phenotypic study factors, allowing medical researchers to perform more complex and reliable association studies on larger, high-quality datasets.

@inproceedings{Mitkas2008ASSIST,
author={Pericles Mitkas and Christos Maramis and Anastasios Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Manolis Falelakis and Fotis Psomopoulos and Alexandros Batzios and Nicos Maglaveras and Irini Lekka and Vassilis Koutkias and Theodoros Agorastos and Themistoklis Mikos and Antonios Tantsis},
title={ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer},
booktitle={6th European Symposium on Biomedical Engineering},
address={Chania, Crete, Greece},
year={2008},
month={01},
date={2008-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/esbmeassist.pdf},
abstract={Advances in biomedical engineering have lately facilitated medical data acquisition, leading to increased availability of both genetic and phenotypic patient data. Particularly, in the area of cervical cancer, intensive research investigates the role of specific genetic and environmental factors in determining the persistence of the HPV virus – which is the primary causal factor of cervical cancer – and the subsequent progression of the disease. In this direction, genetic association studies constitute a widely used scientific approach for medical research. However, despite the increased data availability worldwide, individual studies are often inconclusive due to the physical and conceptual isolation of the medical centers, which limits the pool of data actually available to each researcher. ASSIST, an EU-funded research project, aims at facilitating medical research on cervical cancer by tackling these data isolation issues. To accomplish that, it virtually unifies multiple patient record repositories, physically located at different sites, and subsequently employs inferencing techniques on the unified medical knowledge to enable the execution of cervical cancer related association studies that comprise both genotypic and phenotypic study factors, allowing medical researchers to perform more complex and reliable association studies on larger, high-quality datasets.}
}

(C)
Pericles A. Mitkas, Vassilis Koutkias, Andreas L. Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach"
Proceedings of the International Congress of the European Federation for Medical Informatics (MIE08), Goteborg, Sweden, 2008 May
[Abstract][BibTex]

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@inproceedings{Mitkas2008Association,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas L. Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
booktitle={Proceedings of the International Congress of the European Federation for Medical Informatics (MIE08)},
address={Goteborg, Sweden},
year={2008},
month={05},
date={2008-05-25},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

(C)
Angelos Tsolakis, Manolis Falelakis and Anastasios Delopoulos
"A framework for efficient correspondence using feature interrelations"
19th International Conference on Pattern Recognition, 2008. ICPR 2008, pp. 1-4, IEEE, Tampa, FL, 2008 Dec
[Abstract][BibTex][pdf]

We propose a formulation for solving the point pattern correspondence problem, relying on transformation invariants. Our approach can accommodate descriptors of any degree, thus modeling any kind of potential deformation according to the needs of each specific problem. Other potential descriptors such as color or local appearance can also be incorporated. A brief study on the complexity of the methodology is made, which proves to be inherently polynomial while allowing for further adjustments via thresholding. Initial experiments on both synthetic and real data demonstrate its potential in terms of accuracy and robustness to noise and outliers.

@inproceedings{Tsolakis2008Framework,
author={Angelos Tsolakis and Manolis Falelakis and Anastasios Delopoulos},
title={A framework for efficient correspondence using feature interrelations},
booktitle={19th International Conference on Pattern Recognition, 2008. ICPR 2008},
pages={1-4},
publisher={IEEE},
address={Tampa, FL},
year={2008},
month={12},
date={2008-12-08},
url={http://dx.doi.org/10.1109/ICPR.2008.4761227},
doi={10.1109/ICPR.2008.4761227},
abstract={We propose a formulation for solving the point pattern correspondence problem, relying on transformation invariants. Our approach can accommodate descriptors of any degree, thus modeling any kind of potential deformation according to the needs of each specific problem. Other potential descriptors such as color or local appearance can also be incorporated. A brief study on the complexity of the methodology is made, which proves to be inherently polynomial while allowing for further adjustments via thresholding. Initial experiments on both synthetic and real data demonstrate its potential in terms of accuracy and robustness to noise and outliers.}
}

2006

(C)
Nikos Batalas, Christos Diou and Anastasios Delopoulos
"Efficient Indexing, Color Descriptors and Browsing in Image Databases"
1st International Workshop on Semantic Media Adaptation and Personalization (SMAP06), pp. 129-134, Athens, Greece, 2006 Jan
[Abstract][BibTex][pdf]

This work provides an experimental evaluation of various existing approaches for some of the major problems content based image retrieval applications are faced with. More specifically, global color representation, indexing and navigation methods are analyzed and insight is provided regarding their efficiency and applicability. Furthermore this paper proposes and evaluates the combined use of FastMap and kd-trees to enable accurate and fast retrieval in image databases.

@inproceedings{Batalas2006Efficient,
author={Nikos Batalas and Christos Diou and Anastasios Delopoulos},
title={Efficient Indexing, Color Descriptors and Browsing in Image Databases},
booktitle={1st International Workshop on Semantic Media Adaptation and Personalization (SMAP06)},
pages={129-134},
address={Athens, Greece},
year={2006},
month={01},
date={2006-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/smap06_01.pdf},
abstract={This work provides an experimental evaluation of various existing approaches for some of the major problems content based image retrieval applications are faced with. More specifically, global color representation, indexing and navigation methods are analyzed and insight is provided regarding their efficiency and applicability. Furthermore this paper proposes and evaluates the combined use of FastMap and kd-trees to enable accurate and fast retrieval in image databases.}
}
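
The entry above evaluates the combined use of FastMap and kd-trees for fast image retrieval. As a hedged sketch (not the paper's implementation), the function below shows only the FastMap embedding step, which maps objects into R^k given nothing but a pairwise distance function, using the classic pivot-projection formula and a farthest-pair pivot heuristic; the resulting low-dimensional coordinate vectors are what one would then index with a kd-tree. The sample points are illustrative.

```python
import math

def fastmap(objects, dist, k):
    """Embed objects into R^k given only a pairwise distance function.
    A minimal FastMap sketch; pivots are chosen by a farthest-pair heuristic."""
    n = len(objects)
    coords = [[0.0] * k for _ in range(n)]

    def d2(i, j, axis):
        # Squared distance with the contribution of earlier axes removed.
        r = dist(objects[i], objects[j]) ** 2
        for t in range(axis):
            r -= (coords[i][t] - coords[j][t]) ** 2
        return max(r, 0.0)

    for axis in range(k):
        # Pick two far-apart pivot objects a and b for this axis.
        a = 0
        b = max(range(n), key=lambda j: d2(a, j, axis))
        a = max(range(n), key=lambda j: d2(b, j, axis))
        dab2 = d2(a, b, axis)
        if dab2 == 0.0:
            break  # all remaining residual distances are zero
        for i in range(n):
            # Projection of object i onto the pivot line (a, b).
            coords[i][axis] = (d2(a, i, axis) + dab2 - d2(b, i, axis)) / (2.0 * math.sqrt(dab2))
    return coords

# Illustrative 2-D points; with k = 2 the embedding preserves their distances.
points = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0), (3.0, 4.0)]
euclid = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
embedding = fastmap(points, euclid, 2)
```

For genuinely Euclidean data whose intrinsic dimension does not exceed k, the embedded pairwise distances match the originals, so a kd-tree over `embedding` answers nearest-neighbor queries without evaluating the (possibly expensive) original distance function.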

(C)
Christos Diou, Giorgos Katsikatsos and Anastasios Delopoulos
"Constructing Fuzzy Relations from WordNet for Word Sense Disambiguation"
Semantic Media Adaptation and Personalization, pp. 135-140, Athens, Greece, 2006 Dec
[Abstract][BibTex][pdf]

In this work, the problem of word sense disambiguation is formulated as a problem of imprecise associations between words and word senses in a textual context. The approach has two main parts. Initially, we consider that for each sense, a fuzzy set is given that provides the degrees of association between a number of words and the sense. An algorithm is provided that ranks the senses of a word in a text based on this information, effectively leading to word sense disambiguation. In the second part, a method based on WordNet is developed that constructs the fuzzy sets for the senses (independent of any text). Algorithms are provided that can help in both understanding and implementation of the proposed approach. Experimental results are satisfactory and show that modeling word sense disambiguation as a problem of imprecise associations is promising.

@inproceedings{Diou2006Constructing,
author={Christos Diou and Giorgos Katsikatsos and Anastasios Delopoulos},
title={Constructing Fuzzy Relations from WordNet for Word Sense Disambiguation},
booktitle={Semantic Media Adaptation and Personalization},
pages={135-140},
address={Athens, Greece},
year={2006},
month={12},
date={2006-12-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/04041972.pdf},
doi={10.1109/SMAP.2006.14},
abstract={In this work, the problem of word sense disambiguation is formulated as a problem of imprecise associations between words and word senses in a textual context. The approach has two main parts. Initially, we consider that for each sense, a fuzzy set is given that provides the degrees of association between a number of words and the sense. An algorithm is provided that ranks the senses of a word in a text based on this information, effectively leading to word sense disambiguation. In the second part, a method based on WordNet is developed that constructs the fuzzy sets for the senses (independent of any text). Algorithms are provided that can help in both understanding and implementation of the proposed approach. Experimental results are satisfactory and show that modeling word sense disambiguation as a problem of imprecise associations is promising.}
}
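
To make the first part of the approach above concrete: suppose the per-sense fuzzy sets are already given; ranking the senses of a word then reduces to aggregating membership degrees over the context words. The sense labels and degrees below are invented for illustration, and the plain-sum aggregation is a simplified stand-in for the paper's ranking algorithm.

```python
# Rank the senses of an ambiguous word by fuzzy association with its context.

def rank_senses(sense_sets, context):
    """sense_sets: {sense: {word: membership degree in [0, 1]}}.
    Scores each sense by summing its memberships over the context words
    (a simple aggregation) and returns (sense, score) pairs, best first."""
    scores = {
        sense: sum(members.get(word, 0.0) for word in context)
        for sense, members in sense_sets.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical fuzzy sets for two senses of "bank".
senses_of_bank = {
    "bank#finance": {"money": 0.9, "loan": 0.8, "water": 0.0},
    "bank#river":   {"water": 0.9, "shore": 0.8, "money": 0.05},
}
ranking = rank_senses(senses_of_bank, ["money", "loan", "interest"])
# Financial context words push "bank#finance" to the top of the ranking.
```

The paper's second part, constructing the fuzzy sets themselves from WordNet, is the harder step and is not sketched here.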

(C)
Christos Diou, Anastasia Manta and Anastasios Delopoulos
"Space-time tubes and motion representation"
Proceedings of the 3rd IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI), Athens, Greece, 2006 Jan
[Abstract][BibTex][pdf]

Space-time tubes, a feature that can be used for analysis of motion based on the observed moving points in a scene is introduced. Information provided by sensors is used to detect moving points and based on their connectivity, tubes enable a structured approach towards identifying moving objects and high level events. It is shown that using tubes in conjunction with domain knowledge can overcome errors caused by the inaccuracy or inadequacy of the original motion information. The detected high level events can then be mapped to small natural language descriptions of object motion in the scene.

@inproceedings{Diou2006Space,
author={Christos Diou and Anastasia Manta and Anastasios Delopoulos},
title={Space-time tubes and motion representation},
booktitle={Proceedings of the 3rd IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI)},
address={Athens, Greece},
year={2006},
month={01},
date={2006-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_0-387-34224-9_68.pdf},
abstract={Space-time tubes, a feature that can be used for analysis of motion based on the observed moving points in a scene is introduced. Information provided by sensors is used to detect moving points and based on their connectivity, tubes enable a structured approach towards identifying moving objects and high level events. It is shown that using tubes in conjunction with domain knowledge can overcome errors caused by the inaccuracy or inadequacy of the original motion information. The detected high level events can then be mapped to small natural language descriptions of object motion in the scene.}
}

(C)
Pericles A. Mitkas, Anastasios N. Delopoulos, Andreas L. Symeonidis and Fotis E. Psomopoulos
"A Framework for Semantic Data Integration and Inferencing on Cervical Cancer"
Hellenic Bioinformatics and Medical Informatics Meeting, Biomedical Research Foundation, Academy of Athens, Greece, 2006 Oct
[Abstract][BibTex][pdf]

Advances in the area of biomedicine and bioengineering have allowed for more accurate and detailed data acquisition in the area of health care. Examinations that once were time- and cost-prohibitive are now available to the public, providing physicians and clinicians with more patient data for diagnosis and successful treatment. These data are also used by medical researchers in order to perform association studies among environmental agents, virus characteristics and genetic attributes, extracting new and interesting risk markers which can be used to enhance early diagnosis and prognosis. Nevertheless, scientific progress is hindered by the fact that each medical center operates in relative isolation, regarding datasets and medical effort, since there is no universally accepted archetype/ontology for medical data acquisition, data storage and labeling. This, exactly, is the major goal of ASSIST: to virtually unify multiple patient record repositories, physically located at different laboratories, clinics and/or hospitals. ASSIST focuses on cervical cancer and implements a semantically-aware integration layer that unifies data in a seamless manner. Data privacy and security are ensured by techniques for data anonymization, secure data access and storage. Both the clinician as well as the medical researcher will have access to a knowledge base on cervical cancer and will be able to perform more complex and elaborate association studies on larger groups.

@inproceedings{Mitkas2006Framework,
author={Pericles A. Mitkas and Anastasios N. Delopoulos and Andreas L. Symeonidis and Fotis E. Psomopoulos},
title={A Framework for Semantic Data Integration and Inferencing on Cervical Cancer},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2006},
month={10},
date={2006-10-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/ASSISTBioacademy.pdf},
abstract={Advances in the area of biomedicine and bioengineering have allowed for more accurate and detailed data acquisition in the area of health care. Examinations that once were time- and cost-prohibitive are now available to the public, providing physicians and clinicians with more patient data for diagnosis and successful treatment. These data are also used by medical researchers in order to perform association studies among environmental agents, virus characteristics and genetic attributes, extracting new and interesting risk markers which can be used to enhance early diagnosis and prognosis. Nevertheless, scientific progress is hindered by the fact that each medical center operates in relative isolation, regarding datasets and medical effort, since there is no universally accepted archetype/ontology for medical data acquisition, data storage and labeling. This, exactly, is the major goal of ASSIST: to virtually unify multiple patient record repositories, physically located at different laboratories, clinics and/or hospitals. ASSIST focuses on cervical cancer and implements a semantically-aware integration layer that unifies data in a seamless manner. Data privacy and security are ensured by techniques for data anonymization, secure data access and storage. Both the clinician as well as the medical researcher will have access to a knowledge base on cervical cancer and will be able to perform more complex and elaborate association studies on larger groups.}
}

2005

(C)
Niki Aifanti and Anastasios Delopoulos
"Fuzzy-logic Based Information Fusion for Image Segmentation"
IEEE International Conference on Image Processing 2005, 2005 Sep
[Abstract][BibTex][pdf]

This work presents an information fusion mechanism for image segmentation using multiple cues. Initially, a fuzzy clustering of each cue space is performed and corresponding membership functions are produced on the image coordinates space. The latter include complementary as well as redundant information. A fuzzy inference mechanism is developed, which exploits these characteristics and fuses the membership functions. The produced aggregate membership functions represent objects, which bear combinations of the properties specified by the cues. The segmented image results after post-processing and defuzzification, which involves majority voting. A fuzzy rule based merging algorithm is finally proposed for reducing possible oversegmentation. Experimental results have been included to illustrate the steps and the efficiency of the algorithm.

@inproceedings{Aifanti2005Fuzzy,
author={Niki Aifanti and Anastasios Delopoulos},
title={Fuzzy-logic Based Information Fusion for Image Segmentation},
booktitle={IEEE International Conference on Image Processing 2005},
year={2005},
month={09},
date={2005-09-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/01530279.pdf},
doi={10.1109/ICIP.2005.1530279},
abstract={This work presents an information fusion mechanism for image segmentation using multiple cues. Initially, a fuzzy clustering of each cue space is performed and corresponding membership functions are produced on the image coordinates space. The latter include complementary as well as redundant information. A fuzzy inference mechanism is developed, which exploits these characteristics and fuses the membership functions. The produced aggregate membership functions represent objects, which bear combinations of the properties specified by the cues. The segmented image results after post-processing and defuzzification, which involves majority voting. A fuzzy rule based merging algorithm is finally proposed for reducing possible oversegmentation. Experimental results have been included to illustrate the steps and the efficiency of the algorithm.}
}

(C)
Christos Diou, Manolis Falelakis and Anastasios Delopoulos
"Knowledge Based Unification of Medical Archives"
International Networking Conference (INC2005), Samos, Greece, 2005 Jan
[Abstract][BibTex]

@inproceedings{Diou2005Knowledge,
author={Christos Diou and Manolis Falelakis and Anastasios Delopoulos},
title={Knowledge Based Unification of Medical Archives},
booktitle={International Networking Conference (INC2005)},
address={Samos, Greece},
year={2005},
month={01},
date={2005-01-01}
}

(C)
Manolis Falelakis, Christos Diou, Anastasios Valsamidis and Anastasios Delopoulos
"Complexity Control in Semantic Identification"
IEEE International Conference on Fuzzy Systems (FUZZ-IEEE05), pp. 102-107, Reno, Nevada, USA, 2005 Jan
[Abstract][BibTex][pdf]

This paper proposes a methodology for modeling the process of semantic identification and controlling its complexity and accuracy of the results. Each semantic entity is defined in terms of lower level semantic entities and low level features that can be automatically extracted, while different membership degrees are assigned to each one of the entities participating in a definition, depending on their importance for the identification. By selecting only a subset of the features that are used to define a semantic entity both complexity and accuracy of the results are reduced. It is possible, however, to design the identification using the metrics introduced, so that satisfactory results are obtained, while complexity remains below some required limit.

@inproceedings{Falelakis2005Complexity,
author={Manolis Falelakis and Christos Diou and Anastasios Valsamidis and Anastasios Delopoulos},
title={Complexity Control in Semantic Identification},
booktitle={IEEE International Conference on Fuzzy Systems (FUZZ-IEEE05)},
pages={102-107},
address={Reno, Nevada, USA},
year={2005},
month={01},
date={2005-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/Falelakis-Diou-Valsamidis-Delopoulos_FUZZIEEE05_paper1.pdf},
doi={10.1504/IJISTA.2006.009907},
abstract={This paper proposes a methodology for modeling the process of semantic identification and controlling its complexity and accuracy of the results. Each semantic entity is defined in terms of lower level semantic entities and low level features that can be automatically extracted, while different membership degrees are assigned to each one of the entities participating in a definition, depending on their importance for the identification. By selecting only a subset of the features that are used to define a semantic entity both complexity and accuracy of the results are reduced. It is possible, however, to design the identification using the metrics introduced, so that satisfactory results are obtained, while complexity remains below some required limit.}
}

(C)
Manolis Falelakis, Christos Diou, Anastasios Valsamidis and Anastasios Delopoulos
"Dynamic Semantic Identification with Complexity Constraints as a Knapsack Problem"
The 14th IEEE International Conference on Fuzzy Systems, 2005. FUZZ '05., IEEE, 2005 May
[Abstract][BibTex][pdf]

The process of automatic identification of high level semantic entities (e.g., objects, concepts or events) in multimedia documents requires processing by means of algorithms that are used for feature extraction, i.e. low level information needed for the analysis of these documents at a semantic level. This work copes with the high and often prohibitive computational complexity of this procedure. Emphasis is given to a dynamic scheme that allows for efficient distribution of the available computational resources in application scenarios that deal with the identification of multiple high level entities under strict simultaneous restrictions, such as real time applications.

@inproceedings{Falelakis2005Dynamic,
author={Manolis Falelakis and Christos Diou and Anastasios Valsamidis and Anastasios Delopoulos},
title={Dynamic Semantic Identification with Complexity Constraints as a Knapsack Problem},
booktitle={The 14th IEEE International Conference on Fuzzy Systems, 2005. FUZZ '05.},
publisher={IEEE},
year={2005},
month={05},
date={2005-05-25},
url={http://mug.ee.auth.gr/wp-content/uploads/Falelakis-Diou-Valsamidis-Delopoulos_FUZZIEEE05_paper2.pdf},
doi={10.1109/FUZZY.2005.1452456},
abstract={The process of automatic identification of high level semantic entities (e.g., objects, concepts or events) in multimedia documents requires processing by means of algorithms that are used for feature extraction, i.e. low level information needed for the analysis of these documents at a semantic level. This work copes with the high and often prohibitive computational complexity of this procedure. Emphasis is given to a dynamic scheme that allows for efficient distribution of the available computational resources in application scenarios that deal with the identification of multiple high level entities under strict simultaneous restrictions, such as real time applications.}
}
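
The knapsack formulation named in the title above can be illustrated with a standard 0/1 dynamic program over a computation budget; the extractor names, costs and gain values below are purely illustrative, not taken from the paper.

```python
# 0/1 knapsack sketch: pick feature-extraction algorithms under a fixed
# computation budget so that their total expected contribution to the
# identification of a semantic entity is maximal.

def allocate_budget(extractors, budget):
    """extractors: list of (name, cost, gain) triples; budget: integer cost units.
    Returns (best total gain, list of chosen extractor names)."""
    # dp[b] holds the best (gain, chosen names) achievable within budget b.
    dp = [(0.0, [])] * (budget + 1)
    for name, cost, gain in extractors:
        # Iterate budgets downwards so each extractor is used at most once.
        for b in range(budget, cost - 1, -1):
            candidate = dp[b - cost][0] + gain
            if candidate > dp[b][0]:
                dp[b] = (candidate, dp[b - cost][1] + [name])
    return dp[budget]

# Hypothetical extractors: (name, cost in abstract time units, expected gain).
extractors = [
    ("color-histogram", 2, 0.30),
    ("edge-map",        3, 0.45),
    ("motion-vectors",  4, 0.50),
    ("texture",         1, 0.15),
]
gain, chosen = allocate_budget(extractors, budget=6)
```

With a budget of 6 units the program keeps the three cheaper extractors (total gain 0.90) and drops the expensive motion features, mirroring the kind of budget-constrained trade-off the abstract describes.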

(C)
Manolis Falelakis, Christos Diou, Manolis Wallace and Anastasios Delopoulos
"Minimizing Uncertainty In Semantic Identification When Computing Resources Are Limited"
International Conference on Artificial Neural Networks (ICANN05), pp. 817-822, Springer Berlin Heidelberg, Warsaw, Poland, 2005 Jan
[Abstract][BibTex][pdf]

In this paper we examine the problem of automatic semantic identification of entities in multimedia documents from a computing point of view. Specifically, we identify as main points to consider the storage of the required knowledge and the computational complexity of the handling of the knowledge as well as of the actual identification process. In order to tackle the above we utilize (i) a sparse representation model for storage, (ii) a novel transitive closure algorithm for handling and (iii) a novel approach to identification that allows for the specification of computational boundaries.

@inproceedings{Falelakis2005Minimizing,
author={Manolis Falelakis and Christos Diou and Manolis Wallace and Anastasios Delopoulos},
title={Minimizing Uncertainty In Semantic Identification When Computing Resources Are Limited},
booktitle={International Conference on Artificial Neural Networks (ICANN05)},
pages={817-822},
publisher={Springer Berlin Heidelberg},
address={Warsaw, Poland},
year={2005},
month={01},
date={2005-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_11550907_129.pdf},
doi={10.1007/11550907_129},
abstract={In this paper we examine the problem of automatic semantic identification of entities in multimedia documents from a computing point of view. Specifically, we identify as main points to consider the storage of the required knowledge and the computational complexity of the handling of the knowledge as well as of the actual identification process. In order to tackle the above we utilize (i) a sparse representation model for storage, (ii) a novel transitive closure algorithm for handling and (iii) a novel approach to identification that allows for the specification of computational boundaries.}
}
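
Point (ii) of the abstract above, a transitive closure over sparsely stored knowledge, can be illustrated with the straightforward sup-min fixed-point iteration on a fuzzy relation: along a path the degree is the minimum of the edges, and across alternative paths the maximum is kept. The paper proposes a more efficient algorithm; the relation below is invented for illustration.

```python
# Sup-min transitive closure of a fuzzy relation stored sparsely as
# nested dicts {a: {b: degree}}.

def transitive_closure(relation):
    closure = {a: dict(bs) for a, bs in relation.items()}
    changed = True
    while changed:
        changed = False
        for a in closure:
            for b, d_ab in list(closure[a].items()):
                for c, d_bc in list(closure.get(b, {}).items()):
                    candidate = min(d_ab, d_bc)      # min along the path a -> b -> c
                    if candidate > closure[a].get(c, 0.0):  # max over alternative paths
                        closure[a][c] = candidate
                        changed = True
    return closure

# Hypothetical knowledge: "tremor is-a symptom" (0.8), "symptom is-a finding" (0.9).
relation = {"tremor": {"symptom": 0.8}, "symptom": {"finding": 0.9}}
closed = transitive_closure(relation)
# Derived degree for "tremor" -> "finding": min(0.8, 0.9) = 0.8.
```

The sparse dict-of-dicts layout matches point (i) of the abstract: only nonzero degrees are stored, so the iteration touches only existing edges rather than a dense n-by-n matrix.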

2004

(C)
Manolis Falelakis, Christos Diou and Anastasios Delopoulos
"Identification of Semantics: Balancing between Complexity and Validity"
2004 IEEE 6th Workshop on Multimedia Signal Processing, pp. 434-437, IEEE, Siena, Italy, 2004 Sep
[Abstract][BibTex][pdf]

This paper addresses the problem of identifying semantic entities (e.g., events, objects, concepts etc.) in a particular environment (e.g., a multimedia document, a scene, a signal etc.) by means of an appropriately modelled semantic encyclopedia. Each semantic entity in the encyclopedia is defined in terms of other semantic entities as well as low level features, which we call syntactic entities, in a hierarchical scheme. Furthermore, a methodology is introduced, which can be used to evaluate the direct contribution of every syntactic feature of the document to the identification of semantic entities. This information allows us to estimate the quality of the result as well as the required computational cost of the search procedure and to balance between them. Our approach could be particularly important in real time and/or bulky search/indexing applications.

@inproceedings{Falelakis2004Identification,
author={Manolis Falelakis and Christos Diou and Anastasios Delopoulos},
title={Identification of Semantics: Balancing between Complexity and Validity},
booktitle={2004 IEEE 6th Workshop on Multimedia Signal Processing},
pages={434-437},
publisher={IEEE},
address={Siena, Italy},
year={2004},
month={09},
date={2004-09-29},
url={http://mug.ee.auth.gr/wp-content/uploads/Falelakis-Diou-Delopoulos_MMSP04.pdf},
doi={10.1109/MMSP.2004.1436588},
abstract={This paper addresses the problem of identifying semantic entities (e.g., events, objects, concepts etc.) in a particular environment (e.g., a multimedia document, a scene, a signal etc.) by means of an appropriately modelled semantic encyclopedia. Each semantic entity in the encyclopedia is defined in terms of other semantic entities as well as low level features, which we call syntactic entities, in a hierarchical scheme. Furthermore, a methodology is introduced, which can be used to evaluate the direct contribution of every syntactic feature of the document to the identification of semantic entities. This information allows us to estimate the quality of the result as well as the required computational cost of the search procedure and to balance between them. Our approach could be particularly important in real time and/or bulky search/indexing applications.}
}

(C)
Panagiotis Panagiotopoulos, Manolis Falelakis and Anastasios Delopoulos
"Efficient Semantic Search Using Finite Automata"
6th COST 276 Workshop on Information and Knowledge Management for Integrated Media Communication, 2004 Jan
[Abstract][BibTex][pdf]

An efficient scheme for identifying Semantic Entities within data sets such as multimedia documents, scenes, signals etc. is proposed in this work. Expression of Semantic Entities in terms of Syntactic Properties is proved to be isomorphic to appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, formal definitions of attained Validity and Certainty and also required Complexity are defined as metrics of identification efficiency. The main contribution of the paper relies on organizing the identification and search procedure in a way that maximizes its validity for bounded Complexity budgets and reversely minimizes computational Complexity for a given required Validity threshold.

@inproceedings{Panagiotopoulos2004Efficient,
author={Panagiotis Panagiotopoulos and Manolis Falelakis and Anastasios Delopoulos},
title={Efficient Semantic Search Using Finite Automata},
booktitle={6th COST 276 Workshop on Information and Knowledge Management for Integrated Media Communication},
year={2004},
month={01},
date={2004-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/panagiotopoulos-falelakis-delopoulos-cost276.pdf},
abstract={An efficient scheme for identifying Semantic Entities within data sets such as multimedia documents, scenes, signals etc. is proposed in this work. Expression of Semantic Entities in terms of Syntactic Properties is proved to be isomorphic to appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, formal definitions of attained Validity and Certainty and also required Complexity are defined as metrics of identification efficiency. The main contribution of the paper relies on organizing the identification and search procedure in a way that maximizes its validity for bounded Complexity budgets and reversely minimizes computational Complexity for a given required Validity threshold.}
}

2002

(C)
Yannis Avrithis, Giorgos Stamou, Anastasios Delopoulos and Stefanos Kollias
"Intelligent Semantic Access to Audiovisual Content"
2nd Hellenic Conference on Artificial Intelligence (SETN-02), pp. 11-12, 2002 Jan
[Abstract][BibTex][pdf]

In this paper, an integrated information system is presented that offers enhanced search and retrieval capabilities to users of heterogeneous digital audiovisual (a/v) archives. This novel system exploits the advances in handling a/v content and related metadata, as introduced by MPEG-4 and worked out by MPEG-7, to offer advanced access services characterized by the tri-fold “semantic phrasing of the request (query)”, “unified handling ” and “personalized response”. The proposed system is targeting the intelligent extraction of semantic information from a/v and text related data taking into account the nature of useful queries that users may issue, and the context determined by user profiles. From a technical point of view, it will play the role of an intermediate access server residing between the end users and multiple heterogeneous audiovisual archives organized according to new MPEG standards.

@inproceedings{Avrithis2002Intelligent,
author={Yannis Avrithis and Giorgos Stamou and Anastasios Delopoulos and Stefanos Kollias},
title={Intelligent Semantic Access to Audiovisual Content},
booktitle={2nd Hellenic Conference on Artificial Intelligence (SETN-02)},
pages={11-12},
year={2002},
month={01},
date={2002-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1007_3-540-46014-4_20.pdf},
abstract={In this paper, an integrated information system is presented that offers enhanced search and retrieval capabilities to users of heterogeneous digital audiovisual (a/v) archives. This novel system exploits the advances in handling a/v content and related metadata, as introduced by MPEG-4 and worked out by MPEG-7, to offer advanced access services characterized by the tri-fold “semantic phrasing of the request (query)”, “unified handling” and “personalized response”. The proposed system targets the intelligent extraction of semantic information from a/v and text-related data, taking into account the nature of useful queries that users may issue, and the context determined by user profiles. From a technical point of view, it will play the role of an intermediate access server residing between the end users and multiple heterogeneous audiovisual archives organized according to new MPEG standards.}
}

2001

(C)
Giorgos Akrivas, Spiros Ioannou, Elias Karakoulakis, Kostas Karpouzis, Yannis Avrithis, Anastasios Delopoulos, Stefanos Kollias, Iraklis Varlamis and Michalis Vazirgiannis
"An Intelligent System for Retrieval and Mining of Audiovisual Material Based on the MPEG-7 Description Schemes"
Proceedings of European Symposium on Intelligent Technologies, Hybrid Systems and their implementation on Smart Adaptive Systems, Tenerife, Spain, 2001 Dec
[Abstract][BibTex][pdf]

audiovisual archives, multimedia databases, multimedia description schemes, retrieval and mining of audiovisual data

@inproceedings{Akrivas2001Intelligent,
author={Giorgos Akrivas and Spiros Ioannou and Elias Karakoulakis and Kostas Karpouzis and Yannis Avrithis and Anastasios Delopoulos and Stefanos Kollias and Iraklis Varlamis and Michalis Vazirgiannis},
title={An Intelligent System for Retrieval and Mining of Audiovisual Material Based on the MPEG-7 Description Schemes},
booktitle={Proceedings of European Symposium on Intelligent Technologies, Hybrid Systems and their implementation on Smart Adaptive Systems},
address={Tenerife, Spain},
year={2001},
month={12},
date={2001-12-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/182.pdf},
abstract={audiovisual archives, multimedia databases, multimedia description schemes, retrieval and mining of audiovisual data}
}

(C)
Fotini-Niovi Pavlidou and Christos Papachristou
"Collision-Free Operation in Ad Hoc Carrier Sense Multiple Access Wireless Networks"
International Symposium on 3rd Generation Infrastructures and Services (3GIS), Athens, Greece, 2001 Jan
[Abstract][BibTex]

@inproceedings{Pavlidou2001Collision,
author={Fotini-Niovi Pavlidou and Christos Papachristou},
title={Collision-Free Operation in Ad Hoc Carrier Sense Multiple Access Wireless Networks},
booktitle={International Symposium on 3rd Generation Infrastructures and Services (3GIS)},
address={Athens, Greece},
year={2001},
month={01},
date={2001-01-01}
}

(C)
Gabriel Tsechpenakis, Yiannis Xirouhakis and Anastasios Delopoulos
"A Multiresolution Approach for Main Mobile Object Localization in Video Sequences"
International Workshop on Very Low Bitrate Video Coding, 2001 Nov
[Abstract][BibTex][pdf]

Main mobile object localization is a task that emerges in research fields such as video understanding, object-based coding and various related applications, such as content-based retrieval, remote surveillance and object recognition. The present work revisits mobile object localization in the context of content-based retrieval schemes and the related MPEG-7 framework, for natural and synthetic, indoor and outdoor sequences, when either a static or a mobile camera is utilized. The proposed multiresolution approach greatly improves the trade-off between accuracy and time-performance, leading to satisfactory results with a considerably low amount of computations. Moreover, based on the point gatherings extracted in [14], the bounding polygon and the direction of movement are estimated for each mobile object, thus yielding an adequate representation in the MPEG-7 sense. Finally, the resulting polygons can be used as appropriate initial estimates for methods that extract object contours, e.g. curve propagation approaches, such as [8, 10], which utilize the level-set method. Experimental results over a number of distinct natural sequences have been included to illustrate the performance of the proposed approach.

@inproceedings{Tsechpenakis2001Multiresolution,
author={Gabriel Tsechpenakis and Yiannis Xirouhakis and Anastasios Delopoulos},
title={A Multiresolution Approach for Main Mobile Object Localization in Video Sequences},
booktitle={International Workshop on Very Low Bitrate Video Coding},
year={2001},
month={11},
date={2001-11-01},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/VLBVtesch01.pdf},
abstract={Main mobile object localization is a task that emerges in research fields such as video understanding, object-based coding and various related applications, such as content-based retrieval, remote surveillance and object recognition. The present work revisits mobile object localization in the context of content-based retrieval schemes and the related MPEG-7 framework, for natural and synthetic, indoor and outdoor sequences, when either a static or a mobile camera is utilized. The proposed multiresolution approach greatly improves the trade-off between accuracy and time-performance, leading to satisfactory results with a considerably low amount of computations. Moreover, based on the point gatherings extracted in [14], the bounding polygon and the direction of movement are estimated for each mobile object, thus yielding an adequate representation in the MPEG-7 sense. Finally, the resulting polygons can be used as appropriate initial estimates for methods that extract object contours, e.g. curve propagation approaches, such as [8, 10], which utilize the level-set method. Experimental results over a number of distinct natural sequences have been included to illustrate the performance of the proposed approach.}
}

2000

(C)
Athanasios Drosopoulos, Yiannis Xirouhakis and Anastasios Delopoulos
"Optical Camera Tracking in Virtual Studios: Degenerate Cases"
Pattern Recognition, pp. 1114-1117, IEEE, Barcelona, 2000 Sep
[Abstract][BibTex][pdf]

Over the past few years, virtual studios applications have significantly attracted the attention of the entertainment industry. Optical tracking systems for virtual sets production have become particularly popular tending to substitute electro-mechanical ones. In this work, an existing optical tracking system is revisited, in order to tackle with inherent degenerate cases; namely, reduction of the perspective projection model to the orthographic one and blurring of the blue screen. In this context, we propose a simple algorithm for 3D motion estimation under orthography using 3D-to-2D line correspondences. In addition, the watershed algorithm is employed for successful feature extraction in the presence of defocus or motion blur.

@inproceedings{Drosopoulos2000Optical,
author={Athanasios Drosopoulos and Yiannis Xirouhakis and Anastasios Delopoulos},
title={Optical Camera Tracking in Virtual Studios: Degenerate Cases},
booktitle={Pattern Recognition},
pages={1114-1117},
publisher={IEEE},
address={Barcelona},
year={2000},
month={09},
date={2000-09-03},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/00903741.pdf},
abstract={Over the past few years, virtual studios applications have significantly attracted the attention of the entertainment industry. Optical tracking systems for virtual sets production have become particularly popular tending to substitute electro-mechanical ones. In this work, an existing optical tracking system is revisited, in order to tackle with inherent degenerate cases; namely, reduction of the perspective projection model to the orthographic one and blurring of the blue screen. In this context, we propose a simple algorithm for 3D motion estimation under orthography using 3D-to-2D line correspondences. In addition, the watershed algorithm is employed for successful feature extraction in the presence of defocus or motion blur.}
}

(C)
Gabriel Tsechpenakis, Yiannis Xirouhakis and Anastasios Delopoulos
"Main Mobile Object Detection and Localization in Video Sequences"
Advances in Visual Information Systems, pp. 84-95, Lyon, France, 2000 Nov
[Abstract][BibTex][pdf]

Main mobile object detection and localization is a task of major importance in the fields of video understanding, object-based coding and numerous related applications, such as content-based retrieval, remote surveillance and object recognition. The present work revisits the algorithm proposed in [13] for mobile object localization in both indoor and outdoor sequences when either a static or a mobile camera is utilized. The proposed approach greatly improves the trade-off between accuracy and time-performance leading to satisfactory results with a considerably low amount of computations. Moreover, based on the point gatherings extracted in [13], the bounding polygon and the direction of movement are estimated for each mobile object; thus yielding an adequate representation in the MPEG-7 sense. Experimental results over a number of distinct natural sequences have been included to illustrate the performance of the proposed approach.

@inproceedings{Tsechpenakis2000Main,
author={Gabriel Tsechpenakis and Yiannis Xirouhakis and Anastasios Delopoulos},
title={Main Mobile Object Detection and Localization in Video Sequences},
booktitle={Advances in Visual Information Systems},
pages={84-95},
address={Lyon, France},
year={2000},
month={11},
date={2000-11-02},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/mainmobiletsech00.pdf},
doi={10.1007/3-540-40053-2_8},
abstract={Main mobile object detection and localization is a task of major importance in the fields of video understanding, object-based coding and numerous related applications, such as content-based retrieval, remote surveillance and object recognition. The present work revisits the algorithm proposed in [13] for mobile object localization in both indoor and outdoor sequences when either a static or a mobile camera is utilized. The proposed approach greatly improves the trade-off between accuracy and time-performance leading to satisfactory results with a considerably low amount of computations. Moreover, based on the point gatherings extracted in [13], the bounding polygon and the direction of movement are estimated for each mobile object; thus yielding an adequate representation in the MPEG-7 sense. Experimental results over a number of distinct natural sequences have been included to illustrate the performance of the proposed approach.}
}

1999

(C)
Yiannis Xirouhakis, George Votsis and Anastasios Delopoulos
"Estimation of 3D Motion and Structure of Human Faces"
Advances in Intelligent Systems: Concepts, Tools and Applications, pp. 333-344, Springer Netherlands, 1999 Jan
[Abstract][BibTex][pdf]

The extraction of motion and shape information of three-dimensional objects from video sequences emerges in various applications, especially within the framework of the MPEG-4 and MPEG-7 standards. Particular attention has been given to this problem within the scope of model-based coding and knowledge-based 3D modeling. In this chapter, a novel algorithm is proposed for the 3D reconstruction of a human face from 2D projections. The obtained results can contribute to several fields, with an emphasis on 3D modeling and characterization of human faces.

@inproceedings{Xirouhakis1999Estimation,
author={Yiannis Xirouhakis and George Votsis and Anastasios Delopoulos},
title={Estimation of 3D Motion and Structure of Human Faces},
booktitle={Advances in Intelligent Systems: Concepts, Tools and Applications},
pages={333-344},
publisher={Springer Netherlands},
year={1999},
month={01},
date={1999-01-01},
url={http://dx.doi.org/10.1007/978-94-011-4840-5_30},
doi={10.1007/978-94-011-4840-5_30},
abstract={The extraction of motion and shape information of three-dimensional objects from video sequences emerges in various applications, especially within the framework of the MPEG-4 and MPEG-7 standards. Particular attention has been given to this problem within the scope of model-based coding and knowledge-based 3D modeling. In this chapter, a novel algorithm is proposed for the 3D reconstruction of a human face from 2D projections. The obtained results can contribute to several fields, with an emphasis on 3D modeling and characterization of human faces.}
}

(C)
Yiannis Xirouhakis, Gabriel Tsechpenakis and Anastasios Delopoulos
"User Choices for Efficient 3D Motion and Shape Extraction from Orthographic Projections"
The 6th IEEE International Conference on Electronics, Circuits and Systems, 1999. Proceedings of ICECS '99, pp. 1261-1264, Pafos, 1999 Sep
[Abstract][BibTex][pdf]

The extraction of structure-from-motion emerges in several research fields such as computer vision, video coding, biomedical engineering and human-computer interaction. The present work focuses on the algorithmic approach of structure-from-motion extraction under orthography, providing, at the same time, guidelines in matters of implementation. Relevant principles, constraints and stability are discussed. The improvement of the algorithm's performance w.r.t. the proposed user choices is illustrated by means of experimental results.

@inproceedings{Xirouhakis1999User,
author={Yiannis Xirouhakis and Gabriel Tsechpenakis and Anastasios Delopoulos},
title={User Choices for Efficient 3D Motion and Shape Extraction from Orthographic Projections},
booktitle={The 6th IEEE International Conference on Electronics, Circuits and Systems, 1999. Proceedings of ICECS '99},
pages={1261-1264},
address={Pafos},
year={1999},
month={09},
date={1999-09-05},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/00814398.pdf},
doi={10.1109/ICECS.1999.814398},
abstract={The extraction of structure-from-motion emerges in several research fields such as computer vision, video coding, biomedical engineering and human-computer interaction. The present work focuses on the algorithmic approach of structure-from-motion extraction under orthography, providing, at the same time, guidelines in matters of implementation. Relevant principles, constraints and stability are discussed. The improvement of the algorithm's performance w.r.t. the proposed user choices is illustrated by means of experimental results.}
}

1998

(C)
Yannis S. Avrithis, Anastasios N. Delopoulos and Vassilios N. Alexopoulos
"Ultrasonic Array Imaging Using CDMA Techniques"
9th European Signal Processing Conference (EUSIPCO 1998), Rhodes, 1998 Sep
[Abstract][BibTex][pdf]

A new method for designing ultrasonic imaging systems is presented in this paper. The method is based on the use of transducer arrays whose elements transmit wideband signals generated by pseudo-random codes, similarly to code division multiple access (CDMA) systems in communications. The use of code sequences instead of pulses, which are typically used in conventional phased arrays, combined with transmit and receive beamforming for steering different codes at each direction, permits parallel acquisition of a large number of measurements corresponding to different directions. Significantly higher image acquisition rate as well as lateral and contrast resolution are thus obtained, while axial resolution remains close to that of phased arrays operating in pulse-echo mode. Time and frequency division techniques are also studied and a unified theoretical model is derived, which is validated by experimental results.

@inproceedings{Avrithis1998Ultrasonic,
author={Yannis S. Avrithis and Anastasios N. Delopoulos and Vassilios N. Alexopoulos},
title={Ultrasonic Array Imaging Using CDMA Techniques},
booktitle={9th European Signal Processing Conference (EUSIPCO 1998)},
address={Rhodes},
year={1998},
month={09},
date={1998-09-08},
url={http://mug.ee.auth.gr/wp-content/uploads/Ultrasonic-Array-Imaging-Using-CDMA-Techniques-Y.-Avrithis1998.pdf},
abstract={A new method for designing ultrasonic imaging systems is presented in this paper. The method is based on the use of transducer arrays whose elements transmit wideband signals generated by pseudo-random codes, similarly to code division multiple access (CDMA) systems in communications. The use of code sequences instead of pulses, which are typically used in conventional phased arrays, combined with transmit and receive beamforming for steering different codes at each direction, permits parallel acquisition of a large number of measurements corresponding to different directions. Significantly higher image acquisition rate as well as lateral and contrast resolution are thus obtained, while axial resolution remains close to that of phased arrays operating in pulse-echo mode. Time and frequency division techniques are also studied and a unified theoretical model is derived, which is validated by experimental results.}
}

(C)
Anastasios Delopoulos and Yiannis Xirouhakis
"Robust Estimation of Motion and Shape based on Orthographic Projections of Rigid Objects"
IEEE Tenth Image and Multidimensional Signal Processing Workshop - IMDSP'98, IEEE, 1998 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1998Robust,
author={Anastasios Delopoulos and Yiannis Xirouhakis},
title={Robust Estimation of Motion and Shape based on Orthographic Projections of Rigid Objects},
booktitle={IEEE Tenth Image and Multidimensional Signal Processing Workshop - IMDSP'98},
publisher={IEEE},
year={1998},
month={01},
date={1998-01-01}
}

(C)
Anastasios Doulamis, Nikolaos Doulamis and Anastasios Delopoulos
"Optimal Subband Analysis Filters Compensating for Quantization and Additive Noise"
9th European Signal Processing Conference (EUSIPCO 1998), IEEE, Rhodes, 1998 Sep
[Abstract][BibTex][pdf]

In this paper, we present an analysis filter design technique which optimally defines the proper decimator so that the quantization noise is compensated. The analysis is based on a distortion criterion minimization using the Lagrange multipliers. The optimal decimation filters are derived through a Riccati solution which involves both the quantization and the interpolation filters. Experimental results are presented indicating the good performance of the proposed technique versus conventional subband filter banks in the presence of quantization noise.

@inproceedings{Doulamis1998Optimal,
author={Anastasios Doulamis and Nikolaos Doulamis and Anastasios Delopoulos},
title={Optimal Subband Analysis Filters Compensating for Quantization and Additive Noise},
booktitle={9th European Signal Processing Conference (EUSIPCO 1998)},
publisher={IEEE},
address={Rhodes},
year={1998},
month={09},
date={1998-09-08},
url={http://mug.ee.auth.gr/wp-content/uploads/DOUL638.pdf},
abstract={In this paper, we present an analysis filter design technique which optimally defines the proper decimator so that the quantization noise is compensated. The analysis is based on a distortion criterion minimization using the Lagrange multipliers. The optimal decimation filters are derived through a Riccati solution which involves both the quantization and the interpolation filters. Experimental results are presented indicating the good performance of the proposed technique versus conventional subband filter banks in the presence of quantization noise.}
}

1997

(C)
Anastasios Delopoulos and Maria Rangoussi
"Cumulants of a Multidimensional Process Observed at Rationally Related Resolutions"
International Workshop on Sampling Theory and Applications, Aveiro, Portugal, 1997 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1997Cumulants,
author={Anastasios Delopoulos and Maria Rangoussi},
title={Cumulants of a Multidimensional Process Observed at Rationally Related Resolutions},
booktitle={International Workshop on Sampling Theory and Applications},
address={Aveiro, Portugal},
year={1997},
month={01},
date={1997-01-01}
}

(C)
Anastasios Delopoulos and Maria Rangoussi
"The Fractal Behaviour of Unvoiced Plosives: A Means for Classification"
5th European Conference on Speech Communication and Technology (EUROSPEECH'97), Rhodes, Greece, 1997 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1997Fractal,
author={Anastasios Delopoulos and Maria Rangoussi},
title={The Fractal Behaviour of Unvoiced Plosives: A Means for Classification},
booktitle={5th European Conference on Speech Communication and Technology (EUROSPEECH'97)},
address={Rhodes, Greece},
year={1997},
month={01},
date={1997-01-01}
}

(C)
Anastasios Delopoulos, Maria Rangoussi and Demetrios Kalogeras
"Fractional Sampling Rate Conversion in the 3rd Order Cumulant domain and Applications"
1997 13th International Conference on Digital Signal Processing Proceedings, 1997. DSP 97, pp. 157-160, IEEE, Santorini, Greece, 1997 Jul
[Abstract][BibTex][pdf]

In a variety of problems a random process is observed at different resolutions while knowledge of the corresponding scale conversion ratio usually contains useful information related to problem-specific quantities. A method is proposed which exploits cumulant domain relations of such signals in order to yield fractional estimates of the unknown conversion ratio. The noise insensitivity and shift invariance property of the cumulants offers advantages to the proposed method over signal domain alternatives. These advantages are discussed in two classes of practical problems involving 1-D and 2-D scale converted signals.

@inproceedings{Delopoulos1997Fractional,
author={Anastasios Delopoulos and Maria Rangoussi and Demetrios Kalogeras},
title={Fractional Sampling Rate Conversion in the 3rd Order Cumulant domain and Applications},
booktitle={1997 13th International Conference on Digital Signal Processing Proceedings, 1997. DSP 97},
pages={157-160},
publisher={IEEE},
address={Santorini, Greece},
year={1997},
month={07},
date={1997-07-02},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/00628001.pdf},
doi={10.1109/ICDSP.1997.628001},
abstract={In a variety of problems a random process is observed at different resolutions while knowledge of the corresponding scale conversion ratio usually contains useful information related to problem-specific quantities. A method is proposed which exploits cumulant domain relations of such signals in order to yield fractional estimates of the unknown conversion ratio. The noise insensitivity and shift invariance property of the cumulants offers advantages to the proposed method over signal domain alternatives. These advantages are discussed in two classes of practical problems involving 1-D and 2-D scale converted signals.}
}

1996

(C)
Vassilios Alexopoulos, Anastasios Delopoulos and Stefanos Kollias
"Towards a Standardization for Medical Video Encoding and Archiving"
14th International EuroPacs Meeting, Heraklion, Crete, Greece, 1996 Jan
[Abstract][BibTex]

@inproceedings{Alexopoulos1996Towards,
author={Vassilios Alexopoulos and Anastasios Delopoulos and Stefanos Kollias},
title={Towards a Standardization for Medical Video Encoding and Archiving},
booktitle={14th International EuroPacs Meeting},
address={Heraklion, Crete, Greece},
year={1996},
month={01},
date={1996-01-01}
}

(C)
Anastasios Delopoulos, Dimitrios Kalogeras, Vassilios Alexopoulos and Stefanos Kollias
"Real Time MPEG-1 Video Transmission over Local Area Networks"
Multimedia Communications and Video Coding, pp. 47-55, Berlin, Germany, 1996 Oct
[Abstract][BibTex][pdf]

This work presents the architecture of an MPEG-1 stream transmission system suitable for point-to-point transfer of live video and audio over TCP/IP local area networks. The hardware and software modules of the system are presented as well, together with experimental results on the statistical behavior of the generated and transmitted MPEG-1 stream.

@inproceedings{Delopoulos1996Real,
author={Anastasios Delopoulos and Dimitrios Kalogeras and Vassilios Alexopoulos and Stefanos Kollias},
title={Real Time MPEG-1 Video Transmission over Local Area Networks},
booktitle={Multimedia Communications and Video Coding},
pages={47-55},
address={Berlin, Germany},
year={1996},
month={10},
date={1996-10-07},
url={http://dx.doi.org/10.1007/978-1-4613-0403-6_7},
doi={10.1007/978-1-4613-0403-6_7},
abstract={This work presents the architecture of an MPEG-1 stream transmission system suitable for point-to-point transfer of live video and audio over TCP/IP local area networks. The hardware and software modules of the system are presented as well, together with experimental results on the statistical behavior of the generated and transmitted MPEG-1 stream.}
}

(C)
Anastasios Delopoulos, Maria Rangoussi and Janne Andersen
"Recognition of voiced speech from the bispectrum"
8th European Signal Processing Conference, 1996. EUSIPCO 1996., IEEE, Trieste, Italy, 1996 Jan
[Abstract][BibTex][pdf]

Recognition of voiced speech phonemes is addressed in this paper using features extracted from the bispectrum of the speech signal. Voiced speech is modeled as a superposition of coupled harmonics, located at frequencies that are multiples of the pitch and modulated by the vocal tract. For this type of signal, nonzero bispectral values are shown to be guaranteed by the estimation procedure employed. The vocal tract frequency response is reconstructed from the bispectrum on a set of frequency points that are multiples of the pitch. An AR model is next fitted on this transfer function. The AR coefficients are used as the feature vector for the subsequent classification step. Any finite dimension vector classifier can be employed at this point. Experiments using the LVQ neural classifier give satisfactory classification scores on real speech data, extracted from the DARPA/TIMIT speech corpus.

@inproceedings{Delopoulos1996Recognition,
author={Anastasios Delopoulos and Maria Rangoussi and Janne Andersen},
title={Recognition of voiced speech from the bispectrum},
booktitle={8th European Signal Processing Conference, 1996. EUSIPCO 1996.},
publisher={IEEE},
address={Trieste, Italy},
year={1996},
month={01},
date={1996-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/sr_5.pdf},
abstract={Recognition of voiced speech phonemes is addressed in this paper using features extracted from the bispectrum of the speech signal. Voiced speech is modeled as a superposition of coupled harmonics, located at frequencies that are multiples of the pitch and modulated by the vocal tract. For this type of signal, nonzero bispectral values are shown to be guaranteed by the estimation procedure employed. The vocal tract frequency response is reconstructed from the bispectrum on a set of frequency points that are multiples of the pitch. An AR model is next fitted on this transfer function. The AR coefficients are used as the feature vector for the subsequent classification step. Any finite dimension vector classifier can be employed at this point. Experiments using the LVQ neural classifier give satisfactory classification scores on real speech data, extracted from the DARPA/TIMIT speech corpus.}
}

1995

(C)
Maria Rangoussi and Anastasios Delopoulos
"Classification of Consonants using Wigner Distribution Features"
12th International Conference on DSP, Limassol, Cyprus, 1995 Jan
[Abstract][BibTex]

@inproceedings{Rangoussi1995Classification,
author={Maria Rangoussi and Anastasios Delopoulos},
title={Classification of Consonants using Wigner Distribution Features},
booktitle={12th International Conference on DSP},
address={Limassol, Cyprus},
year={1995},
month={01},
date={1995-01-01}
}

(C)
Maria Rangoussi and Anastasios Delopoulos
"Recognition of Unvoiced Stops from their Time-Frequency Representation"
1995 International Conference on Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., pp. 792-795, IEEE, Detroit, MI, 1995 May
[Abstract][BibTex][pdf]

The recognition of the unvoiced stop sounds /k/, /p/ and /t/ in a speech signal is an interesting problem, due to the irregular, aperiodic, nonstationary nature of the corresponding signals. Their spotting is much easier, however, thanks to the characteristic silence interval they include. Classification of these three phonemes is proposed, based on the patterns extracted from their time-frequency representation. This is possible because the different articulation points of /k/, /p/ and /t/ are reflected into distinct patterns of evolution of their spectral contents with time. These patterns can be obtained by suitable time-frequency analysis, and then used for classification. The Wigner distribution of the unvoiced stop signals, appropriately smoothed and subsampled, is proposed as the basic classification pattern. Finally, for the classification step, the learning vector quantization (LVQ) classifier of Kohonen (1988) is employed on a set of unvoiced stop signals extracted from the TIMIT speech database, with encouraging results under context- and speaker-independent testing conditions.

@inproceedings{Rangoussi1995Recognition,
author={Maria Rangoussi and Anastasios Delopoulos},
title={Recognition of Unvoiced Stops from their Time-Frequency Representation},
booktitle={1995 International Conference on Acoustics, Speech, and Signal Processing, 1995. ICASSP-95.},
pages={792-795},
publisher={IEEE},
address={Detroit, MI},
year={1995},
month={05},
date={1995-05-09},
url={http://dx.doi.org/10.1109/ICASSP.1995.479813},
doi={10.1109/ICASSP.1995.479813},
abstract={The recognition of the unvoiced stop sounds /k/, /p/ and /t/ in a speech signal is an interesting problem, due to the irregular, aperiodic, nonstationary nature of the corresponding signals. Their spotting is much easier, however, thanks to the characteristic silence interval they include. Classification of these three phonemes is proposed, based on the patterns extracted from their time-frequency representation. This is possible because the different articulation points of /k/, /p/ and /t/ are reflected into distinct patterns of evolution of their spectral contents with time. These patterns can be obtained by suitable time-frequency analysis, and then used for classification. The Wigner distribution of the unvoiced stop signals, appropriately smoothed and subsampled, is proposed as the basic classification pattern. Finally, for the classification step, the learning vector quantization (LVQ) classifier of Kohonen (1988) is employed on a set of unvoiced stop signals extracted from the TIMIT speech database, with encouraging results under context- and speaker-independent testing conditions.}
}

1994

(C)
Anastasios Delopoulos and Stefanos Kollias
"Optimal filterbanks for signal reconstruction from noisy subband components"
28th ASILOMAR Conference on Signals, Systems and Computers, Asilomar CA, USA, 1994 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1994Optimal,
author={Anastasios Delopoulos and Stefanos Kollias},
title={Optimal filterbanks for signal reconstruction from noisy subband components},
booktitle={28th ASILOMAR Conference on Signals, Systems and Computers},
address={Asilomar CA, USA},
year={1994},
month={01},
date={1994-01-01}
}

(C)
Nikolaos G. Panagiotidis, Anastasios Delopoulos and Stefanos D. Kollias
"Neural-Network Based Classification of Laser-Doppler Flowmetry Signals"
1994 IEEE Workshop on Neural Networks for Signal Processing, pp. 709-718, Ermioni, 1994 Sep
[Abstract][BibTex][pdf]

Laser Doppler flowmetry is the most advantageous technique for non-invasive patient monitoring. Based on the Doppler principle, signals corresponding to blood flow are generated, and metrics corresponding to healthy vs. patient samples are extracted. A neural-network based classifier for these metrics is proposed. The signals are initially filtered and transformed into the frequency domain through third-order correlation and bispectrum estimation. The pictorial representation of the correlations is subsequently routed into a neural network based multilayer perceptron classifier, which is described in detail. Finally, experimental results demonstrating the efficiency of the proposed scheme are presented.

@inproceedings{Panagiotidis1994Neural,
author={Nikolaos G. Panagiotidis and Anastasios Delopoulos and Stefanos D. Kollias},
title={Neural-Network Based Classification of Laser-Doppler Flowmetry Signals},
booktitle={1994 IEEE Workshop on Neural Networks for Signal Processing},
pages={709-718},
address={Ermioni},
year={1994},
month={09},
date={1994-09-06},
url={http://mug.ee.auth.gr/wp-content/uploads/publications/00365994.pdf},
doi={10.1109/NNSP.1994.365994},
abstract={Laser Doppler flowmetry is the most advantageous technique for non-invasive patient monitoring. Based on the Doppler principle, signals corresponding to blood flow are generated, and metrics corresponding to healthy vs. patient samples are extracted. A neural-network based classifier for these metrics is proposed. The signals are initially filtered and transformed into the frequency domain through third-order correlation and bispectrum estimation. The pictorial representation of the correlations is subsequently routed into a neural network based multilayer perceptron classifier, which is described in detail. Finally, experimental results demonstrating the efficiency of the proposed scheme are presented.}
}

(C)
Andreas Tirakis, Anastasios Delopoulos and Stefanos Kollias
"Invariant Image Recognition Using Triple Correlations and Neural Networks"
IEEE International Conference on Neural Networks, pp. 4055-4060, Orlando FL, USA, 1994 Jun
[Abstract][BibTex]

Triple-correlation-based image representations were previously (Delopoulos, Tirakis, and Kollias, 1994) combined with neural network architectures for deriving an invariant, with respect to translation, rotation and dilation, robust classification scheme. Efficient implementations are described in this paper, which reduce the computational complexity of the method. Hierarchical, multiresolution neural networks are proposed as an effective architecture for achieving this purpose.

@inproceedings{Tirakis1994Invariant,
author={Andreas Tirakis and Anastasios Delopoulos and Stefanos Kollias},
title={Invariant Image Recognition Using Triple Correlations and Neural Networks},
booktitle={IEEE International Conference on Neural Networks},
pages={4055-4060},
address={Orlando FL, USA},
year={1994},
month={06},
date={1994-06-27},
abstract={Triple-correlation-based image representations were previously (Delopoulos, Tirakis, and Kollias, 1994) combined with neural network architectures for deriving an invariant, with respect to translation, rotation and dilation, robust classification scheme. Efficient implementations are described in this paper, which reduce the computational complexity of the method. Hierarchical, multiresolution neural networks are proposed as an effective architecture for achieving this purpose.}
}

1993

(C)
Yannis Avrithis, Anastasios Delopoulos and Stefanos Kollias
"An Efficient Scheme for Invariant Optical Character Recognition Using Triple Correlations"
Proc. of the Intl. Conf. on DSP and II Intl. Conf. on Comp. Applications to Engineering Systems, Nicosia, Cyprus, 1993 Jan
[Abstract][BibTex][pdf]

The implementation of an efficient scheme for translation, rotation and scale invariant optical character recognition is presented in this paper. An image representation is used, which is based on appropriate clustering and transformation of the image triple-correlation domain. This representation is one-to-one related to the class of all shifted-rotated-scaled versions of the original image, as well as robust to a wide variety of additive noises. Special attention is given to binary images, which are used for Optical Character Recognition, and simulation results illustrate the performance of the proposed implementation.

@inproceedings{Avrithis1993Efficient,
author={Yannis Avrithis and Anastasios Delopoulos and Stefanos Kollias},
title={An Efficient Scheme for Invariant Optical Character Recognition Using Triple Correlations},
booktitle={Proc. of the Intl. Conf. on DSP and II Intl. Conf. on Comp. Applications to Engineering Systems},
address={Nicosia, Cyprus},
year={1993},
month={01},
date={1993-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/10.1.1.157.7106.pdf},
abstract={The implementation of an efficient scheme for translation, rotation and scale invariant optical character recognition is presented in this paper. An image representation is used, which is based on appropriate clustering and transformation of the image triple-correlation domain. This representation is one-to-one related to the class of all shifted-rotated-scaled versions of the original image, as well as robust to a wide variety of additive noises. Special attention is given to binary images, which are used for Optical Character Recognition, and simulation results illustrate the performance of the proposed implementation.}
}

(C)
Maria Rangoussi, Anastasios Delopoulos and Michail K. Tsatsanis
"On the use of higher-order-statistics for robust endpoint detection of speech"
Proc. of 3rd Intl. Workshop on Higher Order Statistics, Lake Tahoe California, USA, 1993 Jan
[Abstract][BibTex]

Third order statistics of speech signals are not identically zero, as it would be expected based on the linear model for voice. This is due to quadratic harmonic coupling produced in the vocal tract. Based on this observation, third order cumulants are employed to address the endpoint detection problem in low SNR level recordings due to their immunity to (colored) additive non-skewed noise. The proposed method uses the maximum singular value of an appropriately formed cumulant matrix to distinguish between voiced parts of the speech signal, and silence (noise). Adaptive implementations are also proposed, making this method computationally attractive. Results of batch and adaptive forms are presented for real and simulated data.

@inproceedings{Rangoussi1993Higher,
author={Maria Rangoussi and Anastasios Delopoulos and Michail K. Tsatsanis},
title={On the use of higher-order-statistics for robust endpoint detection of speech},
booktitle={Proc. of 3rd Intl. Workshop on Higher Order Statistics},
address={Lake Tahoe California, USA},
year={1993},
month={01},
date={1993-01-01},
abstract={Third order statistics of speech signals are not identically zero, as it would be expected based on the linear model for voice. This is due to quadratic harmonic coupling produced in the vocal tract. Based on this observation, third order cumulants are employed to address the endpoint detection problem in low SNR level recordings due to their immunity to (colored) additive non-skewed noise. The proposed method uses the maximum singular value of an appropriately formed cumulant matrix to distinguish between voiced parts of the speech signal, and silence (noise). Adaptive implementations are also proposed, making this method computationally attractive. Results of batch and adaptive forms are presented for real and simulated data.}
}

(C)
Andreas Tirakis, Anastasios Delopoulos and Stefanos Kollias
"Neural-Network-Based Image Classification Using Optimal Multiresolution Analysis"
Proc. of Intl. Conference on NN and SP (NNASP'93), 1993 Jan
[Abstract][BibTex]

@inproceedings{Tirakis1993Neural,
author={Andreas Tirakis and Anastasios Delopoulos and Stefanos Kollias},
title={Neural-Network-Based Image Classification Using Optimal Multiresolution Analysis},
booktitle={Proc. of Intl. Conference on NN and SP (NNASP'93)},
year={1993},
month={01},
date={1993-01-01}
}

1992

(C)
Anastasios Delopoulos, Andreas Tirakis and Stefanos Kollias
"Invariant image recognition using triple correlations"
Proc. of EUSIPCO-92, Brussels, Belgium, 1992 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1992Invariant,
author={Anastasios Delopoulos and Andreas Tirakis and Stefanos Kollias},
title={Invariant image recognition using triple correlations},
booktitle={Proc. of EUSIPCO-92},
address={Brussels, Belgium},
year={1992},
month={01},
date={1992-01-01}
}

(C)
Andreas Tirakis, Anastasios Delopoulos and Stefanos Kollias
"Cumulant-based neural network classifiers"
Proc. of International Conference on Artificial Neural Networks ICANN92, Brighton, UK, 1992 Jan
[Abstract][BibTex]

@inproceedings{Tirakis1992Cumulant,
author={Andreas Tirakis and Anastasios Delopoulos and Stefanos Kollias},
title={Cumulant-based neural network classifiers},
booktitle={Proc. of International Conference on Artificial Neural Networks ICANN92},
address={Brighton, UK},
year={1992},
month={01},
date={1992-01-01}
}

1991

(C)
Anastasios Delopoulos and Georgios B. Giannakis
"Input Design for Consistent Identification in the Presence of Input/Output Noise"
Proc. of Intl. Signal Processing Workshop on Higher-Order Statistics, Chamrousse, France, 1991 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1991Input,
author={Anastasios Delopoulos and Georgios B. Giannakis},
title={Input Design for Consistent Identification in the Presence of Input/Output Noise},
booktitle={Proc. of Intl. Signal Processing Workshop on Higher-Order Statistics},
address={Chamrousse, France},
year={1991},
month={01},
date={1991-01-01}
}

(C)
Anastasios Delopoulos and Georgios B. Giannakis
"Strongly consistent output only and input-output identification in the presence of Gaussian noise"
Proc. of Intl. Conf. on ASSP (ICASSP '91), Toronto, Canada, 1991 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1991Strongly,
author={Anastasios Delopoulos and Georgios B. Giannakis},
title={Strongly consistent output only and input-output identification in the presence of Gaussian noise},
booktitle={Proc. of Intl. Conf. on ASSP (ICASSP '91)},
address={Toronto, Canada},
year={1991},
month={01},
date={1991-01-01}
}

1990

(C)
Anastasios Delopoulos and Georgios B. Giannakis
"Strongly consistent identification algorithms and noise insensitive MSE criteria"
Proc. of 4th Digital Signal Processing Workshop, New Paltz NY, USA, 1990 Jan
[Abstract][BibTex]

@inproceedings{Delopoulos1990Strongly,
author={Anastasios Delopoulos and Georgios B. Giannakis},
title={Strongly consistent identification algorithms and noise insensitive MSE criteria},
booktitle={Proc. of 4th Digital Signal Processing Workshop},
address={New Paltz NY, USA},
year={1990},
month={01},
date={1990-01-01}
}

(C)
Georgios Giannakis and Anastasios Delopoulos
"Nonparametric estimation of autocorrelation and spectra using cumulants and polyspectra"
Proc. of Soc. of Photo-Opt. Instr. Eng., Advanced Signal Processing Alg., and Implem. (SPIE '90), 1990 Nov
[Abstract][BibTex][pdf]

Autocorrelation and spectra of linear random processes can be expressed in terms of cumulants and polyspectra, respectively. The insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop spectral estimators of deterministic and linear non-Gaussian signals using polyspectra. In the time domain, windowed projections of third-order cumulants are shown to yield consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. In the frequency domain, a Fourier-slice solution and a least-squares approach are described for performing spectral analysis through windowed bi-periodograms. Asymptotic variance expressions of the time- and frequency-domain estimators are also presented. Two-dimensional extensions are indicated and potential applications are discussed. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.

@inproceedings{Giannakis1990,
author={Georgios Giannakis and Anastasios Delopoulos},
title={Nonparametric estimation of autocorrelation and spectra using cumulants and polyspectra},
booktitle={Proc. of Soc. of Photo-Opt. Instr. Eng., Advanced Signal Processing Alg., and Implem. (SPIE '90)},
year={1990},
month={11},
date={1990-11-01},
url={http://dx.doi.org/10.1117/12.23504},
doi={10.1117/12.23504},
abstract={Autocorrelation and spectra of linear random processes can be expressed in terms of cumulants and polyspectra, respectively. The insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop spectral estimators of deterministic and linear non-Gaussian signals using polyspectra. In the time domain, windowed projections of third-order cumulants are shown to yield consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. In the frequency domain, a Fourier-slice solution and a least-squares approach are described for performing spectral analysis through windowed bi-periodograms. Asymptotic variance expressions of the time- and frequency-domain estimators are also presented. Two-dimensional extensions are indicated and potential applications are discussed. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.}
}