Publications

2018

(J) Janet van den Boer, Annemiek van der Lee, Lingchuan Zhou, Vasileios Papapanagiotou, Christos Diou, Anastasios Delopoulos and Monica Mars "The SPLENDID Eating Detection Sensor: Development and Feasibility Study" JMIR mHealth and uHealth, 6, (9), pp. 170, 2018 Sep
The available methods for monitoring food intake, which to a great extent rely on self-report, often provide biased and incomplete data, and no good technological solutions are currently available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurements of eating events. The technical performance of this device has been described before, but the literature lacks a description of how such a device is perceived and experienced by potential users. Objective: The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods: Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results: Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor less than 2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions: The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical potential. Potential users are enthusiastic, but to be successful the wearer comfort and discreetness of the device need to be improved.

(J) Christos Diou, Pantelis Lelekas and Anastasios Delopoulos "Image-Based Surrogates of Socio-Economic Status in Urban Neighborhoods Using Deep Multiple Instance Learning" Journal of Imaging, 4, (11), pp. 125, 2018 Oct
(1) Background: Evidence-based policymaking requires data about the local population's socioeconomic status (SES) at detailed geographical level; however, such information is often not available, or is too expensive to acquire. Researchers have proposed solutions to estimate SES indicators by analyzing Google Street View images; however, these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available from statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable in 30 municipalities of varying SES in Greece. Our model has R² = 0.76 and a correlation coefficient of 0.874 with the true unemployment rate, while it achieves a mean absolute percentage error of 0.089 and a mean absolute error of 1.87 on a held-out test set. Similar results are also obtained for other socioeconomic indicators, related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to estimate SES indicators at the local level automatically, using images of parked cars detected via Google Street View, without the need for any manual labeling effort.
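
The regression stage at the end of this pipeline can be illustrated in a few lines of scikit-learn. This is a hedged sketch with made-up numbers, assuming the deep multiple instance learning stage has already reduced each municipality's street images to a single surrogate value; variable names are illustrative, not from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_error

# Illustrative data: one surrogate value per municipality (e.g. an
# aggregate score derived from detected parked cars) and the target
# indicator published by the statistical authority.
surrogate = np.array([[0.21], [0.35], [0.48], [0.52], [0.67], [0.74]])
unemployment_rate = np.array([24.1, 21.3, 18.0, 17.2, 13.5, 11.9])

model = LinearRegression().fit(surrogate, unemployment_rate)
pred = model.predict(surrogate)
print(f"R^2 = {r2_score(unemployment_rate, pred):.3f}")
print(f"MAE = {mean_absolute_error(unemployment_rate, pred):.2f}")
```
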
(J) Maryam Esfandiari, Vasilis Papapanagiotou, Christos Diou, Modjtaba Zandian, Jenny Nolstam, Per Södersten and Cecilia Bergh "Control of Eating Behavior Using a Novel Feedback System" JoVE, (135), 2018 May
Subjects eat food from a plate that sits on a scale connected to a computer. The computer records the weight loss of the plate during the meal and builds a curve of food intake, meal duration and rate of eating, modeled by a quadratic equation. The purpose of the method is to change eating behavior by providing visual feedback on the computer screen that the subject can adapt to, because her/his own rate of eating appears on the screen during the meal. The data generated by the method is automatically analyzed and fitted to the quadratic equation using a custom-made algorithm. The method has the advantage of recording eating behavior objectively and offers the possibility of changing eating behavior both in experiments and in clinical practice. A limitation may be that experimental subjects are affected by the method. The same limitation may be an advantage in clinical practice, as eating behavior is more easily stabilized by the method. A treatment that uses this method has normalized body weight and restored the health of several hundred patients with anorexia nervosa and other eating disorders, and has reduced the weight and improved the health of severely overweight patients.
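
The quadratic model of cumulative intake mentioned in this abstract can be fitted directly with numpy. The sketch below uses invented plate-weight samples and a plain polynomial fit, not the paper's custom algorithm:

```python
import numpy as np

# Illustrative plate-weight samples (grams) over a meal (seconds).
t = np.array([0, 60, 120, 180, 240, 300, 360], dtype=float)
plate_weight = np.array([350, 310, 275, 245, 222, 205, 195], dtype=float)

# Cumulative food intake is the weight removed from the plate so far.
intake = plate_weight[0] - plate_weight

# Fit the quadratic model I(t) = a*t^2 + b*t + c that summarises the curve.
a, b, c = np.polyfit(t, intake, deg=2)
print(f"intake(t) ~= {a:.5f}*t^2 + {b:.3f}*t + {c:.2f}")
print(f"initial eating rate ~= {b:.3f} g/s, deceleration ~= {2*a:.5f} g/s^2")
```
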
(J) George Mamalakis, Christos Diou, Andreas Symeonidis and Leonidas Georgiadis "Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis" Neural Computing and Applications, 2018 Jul
In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon's file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.
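
The sequence comparison named in this abstract (k-nearest neighbours with normalised longest common subsequence) can be sketched compactly. The normalisation by the longer sequence and the toy outlier alphabet below are illustrative assumptions, not the paper's exact choices:

```python
from collections import Counter

def lcs_len(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def normalised_lcs(a, b):
    # One common normalisation: LCS length over the longer sequence.
    return lcs_len(a, b) / max(len(a), len(b)) if a and b else 0.0

def knn_label(query, labelled_seqs, k=3):
    """labelled_seqs: list of (outlier_sequence, 'true'/'false') pairs."""
    ranked = sorted(labelled_seqs, key=lambda s: normalised_lcs(query, s[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [("ABBA", "false"), ("ABCA", "false"), ("XYZX", "true"), ("XYYZ", "true")]
print(knn_label("ABBC", train))  # -> 'false' (resembles the benign sequences)
```
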
(J) Ioannis Sarafis, Christos Diou and Anastasios Delopoulos "Span error bound for weighted SVM with applications in hyperparameter selection (preprint)" CoRR, abs/1809.06124, 2018 Sep
Weighted SVM (or fuzzy SVM) is the most widely used SVM variant, owing its effectiveness to the use of instance weights. Proper selection of the instance weights can lead to increased generalization performance. In this work, we extend the span error bound theory to weighted SVM and we introduce effective hyperparameter selection methods for the weighted SVM algorithm. The significance of the presented work is that it enables the application of the span bound and span-rule with weighted SVM. The span bound is an upper bound of the leave-one-out error that can be calculated using a single trained SVM model. This is important since the leave-one-out error is an almost unbiased estimator of the test error. Similarly, the span-rule gives the actual value of the leave-one-out error. Thus, one can apply the span bound and span-rule as computationally lightweight alternatives to the leave-one-out procedure for hyperparameter selection. The main theoretical contributions are: (a) we prove the necessary and sufficient condition for the existence of the span of a support vector in weighted SVM; and (b) we prove the extension of the span bound and span-rule to weighted SVM. We experimentally evaluate the span bound and the span-rule for hyperparameter selection and we compare them with other methods that are applicable to weighted SVM: K-fold cross-validation and the ξ-α bound. Experiments on 14 benchmark data sets and data sets with importance scores for the training instances show that: (a) the condition for the existence of the span in weighted SVM is satisfied almost always; (b) the span-rule is the most effective method for weighted SVM hyperparameter selection; (c) the span-rule is the best predictor of the test error in the mean square error sense; and (d) the span-rule is efficient and, for certain problems, can be calculated faster than K-fold cross-validation.
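
The span-rule itself needs access to the trained model's support vectors, so the sketch below only shows the baseline the paper compares against: selecting C for a weighted SVM by K-fold cross-validation, using scikit-learn's per-sample weights on synthetic data:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
w = rng.uniform(0.1, 1.0, size=200)   # per-instance importance weights

def kfold_error(C, n_splits=5):
    err = 0.0
    for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
        clf = SVC(C=C, kernel="rbf")
        clf.fit(X[tr], y[tr], sample_weight=w[tr])   # weighted SVM training
        err += np.mean(clf.predict(X[te]) != y[te])
    return err / n_splits

best_C = min([0.1, 1.0, 10.0, 100.0], key=kfold_error)
print("selected C:", best_C)
```
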
(J) Vasilis Papapanagiotou, Christos Diou, Ioannis Ioakimidis, Per Sodersten and Anastasios Delopoulos "Automatic analysis of food intake and meal microstructure based on continuous weight measurements" IEEE Journal of Biomedical and Health Informatics, PP, (99), pp. 1-1, 2018 Mar
The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats food are used for capturing this curve; however, their measurements are contaminated by additive noise and are distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on the representation of the weight-time series by a string of symbols that correspond to events such as bites or food additions. A context-free grammar is then used to model a meal as a sequence of such events, and the selection of the most likely parse tree determines the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators and for bite-instance detection. We compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing, and thus facilitating large-scale studies of eating behavior.
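
The symbolization step this abstract describes can be illustrated with a toy function that maps successive stable plate-weight differences to event symbols. The threshold and symbol set are invented, and the grammar-based parsing that follows in the paper is not shown:

```python
def symbolise(weights, min_bite=3.0):
    """Map successive stable-weight differences to event symbols."""
    symbols = []
    for prev, curr in zip(weights, weights[1:]):
        delta = curr - prev
        if delta <= -min_bite:
            symbols.append("b")   # weight dropped: a bite was taken
        elif delta >= min_bite:
            symbols.append("a")   # weight rose: food was added
    return "".join(symbols)

# Illustrative stable-weight samples between utensil interactions (grams).
print(symbolise([350, 338, 325, 410, 396, 383]))  # -> "bbabb"
```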

2017

(J) Billy Langlet, Anna Anvret, Christos Maramis, Ioannis Moulos, Vasileios Papapanagiotou, Christos Diou, Eirini Lekka, Rachel Heimeier, Anastasios Delopoulos and Ioannis Ioakimidis "Objective measures of eating behaviour in a Swedish high school" Behaviour & Information Technology, 36, (10), pp. 1005-1013, 2017 May
Studying eating behaviours is important in the fields of eating disorders and obesity. However, the current methodologies for quantifying eating behaviour in a real-life setting are lacking, either in reliability (e.g. self-reports) or in scalability. In this descriptive study, we deployed previously evaluated laboratory-based methodologies in a Swedish high school, using the Mandometer®, together with video cameras and a dedicated mobile app, in order to record eating behaviours in a sample of 41 students, 16-17 years old. Without disturbing normal school life, we achieved a 97% data-retention rate, using methods fully accepted by the target population. The overall eating style of the students was similar across genders, with male students eating more than females during lunches of similar length. While both groups took a similar number of bites, males took larger bites across the meal. Interestingly, the recorded school lunches were as long as lunches recorded in a laboratory setting, which is characterised by the absence of social interactions and direct access to additional food. In conclusion, a larger scale use of our methods is feasible, but more hypothesis-based studies are needed to fully describe and evaluate the interactions between the school environment and the recorded eating behaviours.

2015

(J) Ioannis Sarafis, Christos Diou and Anastasios Delopoulos "Building effective SVM concept detectors from clickthrough data for large-scale image retrieval" International Journal of Multimedia Information Retrieval, 4, (2), pp. 129-142, 2015 Jun
Clickthrough data is a source of information that can be used for automatically building concept detectors for image retrieval. Previous studies, however, have shown that in many cases the resulting training sets suffer from severe label noise that has a significant impact on SVM concept detector performance. This paper evaluates and proposes a set of strategies for automatically building effective concept detectors from clickthrough data. These strategies focus on: (1) automatic training set generation; (2) assignment of label confidence weights to the training samples; and (3) using these weights at the classifier level to improve concept detector effectiveness. For training set selection, and in order to assign weights to individual training samples, three Information Retrieval (IR) models are examined: vector space models, BM25 and language models. Three SVM variants that take into account importance at the classifier level are evaluated and compared to the standard SVM: the Fuzzy SVM, the Power SVM, and the Bilateral-weighted Fuzzy SVM. Experiments conducted on the MM Grand Challenge dataset (consisting of 1M images and 82.3M unique clicks) for 40 concepts demonstrate that: (1) on average, all weighted SVM variants are more effective than the standard SVM; (2) the vector space model produces the best training sets and best weights; (3) the Bilateral-weighted Fuzzy SVM produces the best results but is very sensitive to weight assignment; and (4) the Fuzzy SVM is the most robust training approach for varying levels of label noise.
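
As a hedged illustration of the overall strategy, the snippet below turns click counts into label-confidence weights and trains a weighted SVM with them. The log-normalised weighting is an assumption made for illustration, whereas the paper derives weights from IR models such as BM25:

```python
import numpy as np
from sklearn.svm import SVC

# Images clicked for the concept query become noisy positives; their
# click counts act as label-confidence scores. Data are synthetic.
clicks = np.array([41, 17, 3, 1, 25, 2], dtype=float)
X = np.array([[0.9, 0.8], [0.7, 0.9], [0.2, 0.4],
              [0.3, 0.1], [0.8, 0.7], [0.1, 0.3]])
y = np.array([1, 1, 1, 0, 1, 0])  # noisy labels from clickthrough data

# Log-scaled, max-normalised confidence for positives; negatives keep weight 1.
pos_conf = np.log1p(clicks) / np.log1p(clicks).max()
weights = np.where(y == 1, pos_conf, 1.0)

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=weights)
print(clf.predict([[0.85, 0.75], [0.15, 0.2]]))
```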

2013

(J) Nikolaos Dimitriou and Anastasios Delopoulos "Motion-based segmentation of objects using overlapping temporal windows" Image and Vision Computing, 31, (9), pp. 593-602, 2013 Sep
Motion segmentation refers to the problem of separating the objects in a video sequence according to their motion. It is a fundamental problem of computer vision, since various systems focusing on the analysis of dynamic scenes include motion segmentation algorithms. In this paper we present a novel approach, where a video shot is temporally divided into successive and overlapping windows and motion segmentation is performed on each window respectively. This attribute renders the algorithm suitable even for long video sequences. In the last stage of the algorithm, the segmentation results for every window are aggregated into a final segmentation. The presented algorithm can effectively handle asynchronous trajectories on each window, even when they have no temporal intersection. The evaluation of the proposed algorithm on the Berkeley motion segmentation benchmark demonstrates its scalability and accuracy compared to the state of the art.
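
A minimal sketch of the windowing scheme this abstract describes, under the simplifying assumption that per-frame object labels are produced independently for each window and aggregated by majority vote; the paper's actual aggregation of window segmentations is more involved:

```python
from collections import Counter

def overlapping_windows(n_frames, length=30, step=15):
    """Yield (start, end) index pairs of successive overlapping windows."""
    start = 0
    while start < n_frames:
        yield start, min(start + length, n_frames)
        if start + length >= n_frames:
            break
        start += step

def aggregate(per_window_labels, n_frames):
    """per_window_labels: list of ((start, end), labels), one label per frame."""
    votes = [Counter() for _ in range(n_frames)]
    for (start, end), labels in per_window_labels:
        for frame, label in zip(range(start, end), labels):
            votes[frame][label] += 1
    return [v.most_common(1)[0][0] for v in votes]

print(list(overlapping_windows(60)))  # -> [(0, 30), (15, 45), (30, 60)]
```
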
(J) Christos Maramis, Manolis Falelakis, Irini Lekka, Christos Diou, Pericles Mitkas and Anastasios Delopoulos "Applying semantic technologies in cervical cancer research" Data & Knowledge Engineering, 86, pp. 160-178, 2013 Jul
In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case-control association studies, which is the ultimate goal of the system.

2011

(J) Christos Maramis, Anastasios Delopoulos and Alexandros Lambropoulos "A Computerized Methodology for Improved Virus Typing by PCR-RFLP Gel Electrophoresis" IEEE Transactions on Biomedical Engineering, 58, (8), pp. 2339-2351, 2011 Aug
The analysis of digitized images from polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) gel electrophoresis examinations is a popular method for virus typing, i.e., for identifying the virus type(s) that have infected an investigated biological sample. However, being mostly manual, the conventional virus typing protocol remains laborious, time consuming, and error prone. In order to overcome these shortcomings, we propose a computerized methodology for improving virus typing via PCR-RFLP gel electrophoresis. A novel, realistic observation model of the viral DNA motion on the gel matrix is employed to exploit additional virus-related information in comparison to conventional approaches. The extracted rich information is fed to a novel typing algorithm, resulting in faster and more accurate decisions. The proposed methodology is evaluated for the case of human papillomavirus typing on a dataset of 80 real and 1500 simulated samples, producing very satisfactory results.

(J) Christos Maramis and Anastasios Delopoulos "A Novel Algorithm for Restricting the Complexity of Virus Typing via PCR-RFLP Gel Electrophoresis" Biomedical Engineering Letters, 1, (4), pp. 239-246, 2011 Nov
PCR-RFLP gel electrophoresis is a popular method for virus typing (i.e., for identifying the types of a virus that have infected a biological sample), which has recently been automated owing to a computerized typing methodology. However, even with the help of this methodology, the PCR-RFLP method suffers from low throughput when compared to other typing methods. In this paper, we tackle this issue by introducing a novel algorithm for conducting the most computationally demanding phase of the aforementioned typing methodology (the testing phase).

2010

(J) Christos Diou, George Stephanopoulos, Panagiotis Panagiotopoulos, Christos Papachristou, Nikos Dimitriou and Anastasios Delopoulos "Large-Scale Concept Detection in Multimedia Data Using Small Training Sets and Cross-Domain Concept Fusion" IEEE Transactions on Circuits and Systems for Video Technology, 20, (12), pp. 1808-1821, 2010 Oct
This paper presents the concept detector module developed for the VITALAS multimedia retrieval system. It outlines its architecture and major implementation aspects, including a set of procedures and tools that were used for the development of detectors for more than 500 concepts. The focus is on aspects that increase the system's scalability in terms of the number of concepts: collaborative concept definition and disambiguation, selection of small but sufficient training sets, and efficient manual annotation. The proposed architecture uses cross-domain concept fusion to improve effectiveness and reduce the number of samples required for concept detector training. Two criteria are proposed for selecting the best predictors to use for fusion, and their effectiveness is experimentally evaluated for 221 concepts on the TRECVID-2005 development set and 132 concepts on a set of images provided by the Belga news agency. In these experiments, cross-domain concept fusion performed better than early fusion for most concepts. Experiments with variable training set sizes also indicate that cross-domain concept fusion is more effective than early fusion when the training set size is small.
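
As a rough, simplified illustration of concept fusion, the snippet below concatenates the scores of detectors trained on another domain with a sample's own features before training the target concept's classifier. The use of logistic regression and the synthetic data are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
# Scores of 5 cross-domain concept detectors for each target-domain sample.
cross_domain_scores = rng.uniform(size=(n, 5))
# Low-level visual features of the same samples.
visual_features = rng.normal(size=(n, 10))
y = (cross_domain_scores[:, 0] + visual_features[:, 0] > 1.0).astype(int)

# Fuse by concatenating the detector scores with the visual features.
X_fused = np.hstack([visual_features, cross_domain_scores])
clf = LogisticRegression(max_iter=1000).fit(X_fused, y)
print("training accuracy:", clf.score(X_fused, y))
```
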
(J) Theodora Tsikrika, Christos Diou, Arjen P. de Vries and Anastasios Delopoulos "Reliability and effectiveness of clickthrough data for automatic image annotation" Multimedia Tools and Applications, 55, (1), pp. 27-52, 2010 Aug
Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation effort. We investigate and evaluate this approach using a collection of 97,628 photographic images. The results indicate that the contribution of search log based training data is positive despite their inherent noise; in particular, the combination of manual and automatically generated training data outperforms the use of manual data alone. It is therefore possible to use clickthrough data to perform large-scale image annotation with little manual annotation effort or, depending on performance, using only the automatically generated training data. An extensive presentation of the experimental results and the accompanying data can be accessed at http://olympus.ee.auth.gr/~diou/civr2009/.

2009

(J) Theodoros Agorastos, Vassilis Koutkias, Manolis Falelakis, Irini Lekka, Themistoklis Mikos, Anastasios Delopoulos, Pericles Mitkas, Antonios Tantsis, Steven Weyers, Pascal Coorevits, Andreas Kaufmann, Roberto Kurzeja and Nicos Maglaveras "Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach" Cancer Informatics, 8, pp. 31-31, 2009 Jan
The current work addresses the unification of Electronic Health Records related to cervical cancer into a single medical knowledge source, in the context of the EU-funded ASSIST research project. The project aims to facilitate research on cervical precancer and cancer through a system that virtually unifies multiple patient record repositories, physically located in different medical centers/hospitals, thus increasing flexibility by allowing the formation of study groups "on demand" and by recycling patient records in new studies. To this end, ASSIST uses semantic technologies to translate all medical entities (such as patient examination results, history, habits, genetic profile) and represent them in a common form, encoded in the ASSIST Cervical Cancer Ontology. The current paper presents the knowledge elicitation approach followed towards the definition and representation of the disease's medical concepts and rules that constitute the basis for the ASSIST Cervical Cancer Ontology. The proposed approach constitutes a paradigm for semantic integration of heterogeneous clinical data that may be applicable to other biomedical application domains.

2008

(J) Manolis Falelakis, Christos Diou and Anastasios Delopoulos "Complexity control in semantic identification" International Journal of Intelligent Systems Technologies and Applications, 1, (3/4), pp. 247-262, 2008 Jan
This work introduces an efficient scheme for identifying semantic entities within multimedia data sets, providing mechanisms for modelling the trade-off between the accuracy of the result and the entailed computational cost. Semantic entities are described through formal definitions based on lower-level semantic and/or syntactic features. Based on appropriate metrics, the paper presents a methodology for selecting optimal subsets of syntactic features to extract, so that satisfactory results are obtained while complexity remains below some required limit.

2007

(J) Anastasios Delopoulos, Levon Sukissian and Stefanos Kollias "An efficient multiresolution texture classification scheme using neural networks" International Journal of Computer Mathematics, 67, (1-2), pp. 155-168, 2007 Mar
An efficient multiresolution texture classification method is proposed in this paper, based on 2-D linear prediction, multiresolution decomposition and artificial neural networks. A multiresolution spectral analysis of textured images is first developed, which permits 2-D AR texture modelling to be performed at multiple resolutions. Recursive estimation algorithms combined with the Itakura distance measure provide sets of AR model parameters representing different textures at various resolutions. Appropriate neural network banks are constructed and trained, which are then able to effectively perform classification of textures irrespective of their resolution level. Results are presented using real textured images which illustrate the good performance of the proposed approach.

2006

(J) Manolis Falelakis, Christos Diou and Anastasios Delopoulos "Semantic identification: balancing between complexity and validity" EURASIP Journal on Applied Signal Processing, pp. 183-183, 2006 Jan
An efficient scheme for identifying semantic entities within data sets such as multimedia documents, scenes, signals, and so forth, is proposed in this work. The expression of semantic entities in terms of syntactic properties is modelled with appropriately defined finite automata, which also model the identification procedure. Based on the structure and properties of these automata, formal definitions of attained validity and certainty, as well as required complexity, are given as metrics of identification efficiency. The main contribution of the paper lies in organizing the identification and search procedure in a way that maximizes validity for bounded complexity budgets and, conversely, minimizes computational complexity for a given required validity threshold. The associated optimization problem is solved using dynamic programming. Finally, a set of experiments provides insight into the introduced theoretical framework.
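
The optimization described here (maximize attained validity under a complexity budget) has, in a simplified additive form, the structure of a 0/1 knapsack problem. The sketch below assumes integer complexity costs and additive validity gains, which simplifies the paper's formulation:

```python
def best_validity(features, budget):
    """features: list of (validity_gain, complexity_cost); budget: int."""
    dp = [0.0] * (budget + 1)
    for gain, cost in features:
        # Standard 0/1 knapsack update, iterating budgets downwards.
        for b in range(budget, cost - 1, -1):
            dp[b] = max(dp[b], dp[b - cost] + gain)
    return dp[budget]

# Illustrative syntactic features: (validity gain, extraction cost).
features = [(0.40, 5), (0.30, 4), (0.25, 3), (0.10, 1)]
print(best_validity(features, budget=8))  # -> 0.65
```
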
The described system has successfully integrated five multimedia archives, quite different in nature and content from each other, while also providing easy and scalable inclusion of more archives in the future.@article{Wallace2006Integrating,author={M. Wallace and T. Athanasiadis and Y. Avrithis and A. N. Delopoulos and S. Kollias},title={Integrating multimedia archives: the architecture and the content layer},journal={IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans},volume={36},number={1},pages={34-52},year={2006},month={01},date={2006-01-01},url={http://dx.doi.org/10.1109/TSMCA.2005.859184},doi={http://10.1109/TSMCA.2005.859184},abstract={In the last few years, numerous multimedia archives have made extensive use of digitized storage and annotation technologies. Still, the development of single points of access, providing common and uniform access to their data, despite the efforts and accomplishments of standardization organizations, has remained an open issue as it involves the integration of various large-scale heterogeneous and heterolingual systems. This paper describes a mediator system that achieves architectural integration through an extended three-tier architecture and content integration through semantic modeling. The described system has successfully integrated five multimedia archives, quite different in nature and content from each other, while also providing easy and scalable inclusion of more archives in the future.}}
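
The validity-versus-complexity trade-off solved by dynamic programming in the semantic identification paper above can be made concrete with a deliberately simplified analogue (the actual formulation is automata-based and is not reproduced here): treat each syntactic check as having an integer computational cost and an additive validity gain, and select the subset of checks that maximizes attained validity within a complexity budget.

```python
def best_validity(checks, budget):
    """Knapsack-style DP: checks is a list of (cost, validity_gain) pairs,
    budget an integer complexity cap; returns the best attainable validity."""
    best = [0.0] * (budget + 1)
    for cost, gain in checks:
        # Sweep budgets downwards so each check is counted at most once.
        for b in range(budget, cost - 1, -1):
            best[b] = max(best[b], best[b - cost] + gain)
    return best[budget]

# For example, best_validity([(3, 0.4), (5, 0.35), (2, 0.2)], budget=7)
# returns 0.6: the two cheaper checks fit the budget, the costliest pair does not.
```

The reverse problem mentioned in the abstract (minimum complexity for a required validity) is the dual of the same recursion, swapping the roles of the cost and gain axes.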

2003

 (J) C. Diou and Jacek Karwatka "Some methods of identification high clutter regions in radar tracking system" Postepy Radiotechniki, 48, (147), pp. 3-15, 2003 Jan [Abstract][BibTex]@article{Diou2003Some,author={C. Diou and Jacek Karwatka},title={Some methods of identification high clutter regions in radar tracking system},journal={Postepy Radiotechniki},volume={48},number={147},pages={3-15},year={2003},month={01},date={2003-01-01}}

2000

 (J) Yiannis Xirouhakis and Anastasios Delopoulos "A Comparative Study on 3D Motion Estimation under Orthography" Nordic signal processing symposium, 2000 Jun [Abstract][BibTex][pdf]In the present work, the algorithm proposed in [8,10] is tested against existing approaches on 3D motion and structure estimation of rigid objects under orthography. The theoretical relation between the proposed approach and the well-known factorization and epipolar methods is discussed. At the same time, comparative simulated experiments are given, illustrating the performance of the three algorithms (the factorization, the epipolar and the proposed one). The proposed algorithm seems to be more generic than the existing approaches, and provides superior estimates of 3D motion in most cases.@article{Xirouhakis2000Comparative,author={Yiannis Xirouhakis and Anastasios Delopoulos},title={A Comparative Study on 3D Motion Estimation under Orthography},journal={Nordic signal processing symposium},year={2000},month={06},date={2000-06-01},url={http://mug.ee.auth.gr/wp-content/uploads/publications/page057_id151.pdf},abstract={In the present work, the algorithm proposed in [8,10] is tested against existing approaches on 3D motion and structure estimation of rigid objects under orthography. The theoretical relation between the proposed approach and the well-known factorization and epipolar methods is discussed. At the same time, comparative simulated experiments are given, illustrating the performance of the three algorithms (the factorization, the epipolar and the proposed one). The proposed algorithm seems to be more generic than the existing approaches, and provides superior estimates of 3D motion in most cases.}} (J) Yiannis Xirouhakis and Anastasios Delopoulos "Least Squares Estimation of 3D Shape and Motion of Rigid Objects from their Orthographic Projections" IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, (4), pp. 393-399, 2000 Apr [Abstract][BibTex][pdf]The extraction of motion and shape information of three-dimensional objects from their two-dimensional projections is a task that emerges in various applications such as computer vision, biomedical engineering, and video coding and mining, especially after the recent guidelines of the Motion Pictures Expert Group regarding the MPEG-4 and MPEG-7 standards. The present work establishes a novel approach for extracting the motion and shape parameters of a rigid three-dimensional object on the basis of its orthographic projections and the associated motion field. Experimental results have been included to verify the theoretical analysis.@article{Xirouhakis2000Least,author={Yiannis Xirouhakis and Anastasios Delopoulos},title={Least Squares Estimation of 3D Shape and Motion of Rigid Objects from their Orthographic Projections},journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},volume={22},number={4},pages={393-399},year={2000},month={04},date={2000-04-01},url={http://dx.doi.org/10.1109/34.845382},doi={https://doi.org/10.1109/34.845382},abstract={The extraction of motion and shape information of three-dimensional objects from their two-dimensional projections is a task that emerges in various applications such as computer vision, biomedical engineering, and video coding and mining, especially after the recent guidelines of the Motion Pictures Expert Group regarding the MPEG-4 and MPEG-7 standards. The present work establishes a novel approach for extracting the motion and shape parameters of a rigid three-dimensional object on the basis of its orthographic projections and the associated motion field. Experimental results have been included to verify the theoretical analysis.}}
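
For readers unfamiliar with the baselines in these two entries, the sketch below shows the core of the well-known factorization method under orthography (one of the approaches the comparative study discusses), not the authors' least-squares algorithm: centred 2-D point tracks are factored into motion and shape by a rank-3 SVD, leaving the usual affine ambiguity unresolved.

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a (2F, P) measurement matrix W that stacks the
    x- and y-coordinates of P points tracked over F orthographic frames."""
    W = W - W.mean(axis=1, keepdims=True)        # remove per-frame translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = np.sqrt(s[:3])
    M = U[:, :3] * r                             # (2F, 3) motion (camera rows)
    S = r[:, None] * Vt[:3]                      # (3, P) shape, up to a common
    return M, S                                  # affine transform (metric upgrade omitted)
```

A full pipeline would additionally enforce orthonormality of each frame's camera rows (the metric upgrade), which is where the compared methods differ most.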

1999

 (J) Sotirios Pavlopoulos and Anastasios Delopoulos "Designing and implementing the transition to a fully digital hospital" IEEE Transactions on Information Technology in Biomedicine, 3, (1), pp. 6-19, 1999 Mar [Abstract][BibTex][pdf]The increase in the number of examinations performed in modern healthcare institutions, in conjunction with the range of imaging modalities available today, has resulted in a tremendous increase in the number of medical images generated and has made the need for a dedicated system able to acquire, distribute, and store medical image data pressing. Within the framework of the Hellenic R&D program, we have designed and implemented a picture archiving and communication system for a high-tech cardiosurgery hospital in Greece. The system is able to handle in digital form images produced from ultrasound, X-ray angiography, γ-camera, and chest X-rays, as well as electrocardiogram signals. Based on the adoption of an open architecture highly relying on the DICOM standard, the system enables the smooth transition from the existing procedures to a fully digital operation mode and the integration of all existing medical equipment into the new central archiving system.@article{Pavlopoulos1999Designing,author={Sotirios Pavlopoulos and Anastasios Delopoulos},title={Designing and implementing the transition to a fully digital hospital},journal={IEEE Transactions on Information Technology in Biomedicine},volume={3},number={1},pages={6-19},year={1999},month={03},date={1999-03-01},url={http://dx.doi.org/10.1109/4233.748971},doi={https://doi.org/10.1109/4233.748971},abstract={The increase in the number of examinations performed in modern healthcare institutions, in conjunction with the range of imaging modalities available today, has resulted in a tremendous increase in the number of medical images generated and has made the need for a dedicated system able to acquire, distribute, and store medical image data pressing. Within the framework of the Hellenic R&D program, we have designed and implemented a picture archiving and communication system for a high-tech cardiosurgery hospital in Greece. The system is able to handle in digital form images produced from ultrasound, X-ray angiography, γ-camera, and chest X-rays, as well as electrocardiogram signals. Based on the adoption of an open architecture highly relying on the DICOM standard, the system enables the smooth transition from the existing procedures to a fully digital operation mode and the integration of all existing medical equipment into the new central archiving system.}}
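
The interoperability that makes such an open PACS architecture possible comes from the DICOM standard the entry mentions. Purely as a present-day illustration of that standard (the 1999 system obviously predates this library), reading a DICOM object with the modern pydicom package takes a few lines; the file name is a placeholder.

```python
import pydicom

# Parse a DICOM object and access standard header attributes and pixel data.
ds = pydicom.dcmread("example_angiography.dcm")   # placeholder path
print(ds.Modality, ds.PatientID, ds.StudyDate)    # standard DICOM header elements
pixels = ds.pixel_array                           # decoded image as a numpy array
```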

1998

 (J) Stefanos Kollias and Anastasios Delopoulos "Multiresolution Techniques and their Applications to Image Recognition" Expert Systems Techniques and Applications, 1998 Jan [Abstract][BibTex]@article{Kollias1998Multiresolution,author={Stefanos Kollias and Anastasios Delopoulos},title={Multiresolution Techniques and their Applications to Image Recognition},journal={Expert Systems Techniques and Applications},year={1998},month={01},date={1998-01-01}}

1995

 (J) Georgios B. Giannakis and Anastasios Delopoulos "Cumulant based autocorrelation estimates of non-Gaussian linear processes" Signal Processing, 47, (1), pp. 1-17, 1995 Nov [Abstract][BibTex][pdf]Autocorrelation of linear random processes can be expressed in terms of their cumulants. Theoretical insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop (within a scale) autocorrelation estimators of linear non-Gaussian time series using cumulants of order higher than two. Windowed projections of third-order cumulants are shown to yield strongly consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. Asymptotic variance expressions of the proposed estimators are also presented. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.@article{Giannakis2000Cumulant,author={Georgios B. Giannakis and Anastasios Delopoulos},title={Cumulant based autocorrelation estimates of non-Gaussian linear processes},journal={Signal Processing},volume={47},number={1},pages={1-17},year={1995},month={11},date={1995-11-01},url={http://dx.doi.org/10.1016/0165-1684(95)00095-X},doi={https://doi.org/10.1016/0165-1684(95)00095-X},abstract={Autocorrelation of linear random processes can be expressed in terms of their cumulants. Theoretical insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop (within a scale) autocorrelation estimators of linear non-Gaussian time series using cumulants of order higher than two. Windowed projections of third-order cumulants are shown to yield strongly consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. Asymptotic variance expressions of the proposed estimators are also presented. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.}} (J) Andreas Tirakis, Anastasios Delopoulos and Stefanos Kollias "2-D Filter Bank Design for Optimal Reconstruction using Limited Subband Information" IEEE Transactions on Image Processing, 4, (8), pp. 1160-1165, 1995 Aug [Abstract][BibTex][pdf]In this correspondence, we propose design techniques for analysis and synthesis filters of 2-D perfect reconstruction filter banks (PRFBs) that perform optimal reconstruction when a reduced number of subband signals is used. Based on the minimization of the squared error between the original signal and some low-resolution representation of it, the 2-D filters are optimally adjusted to the statistics of the input images so that most of the signal's energy is concentrated in the first few subband components. This property makes the optimal PRFBs efficient for image compression and pattern representations at lower resolutions for classification purposes. By extending recently introduced ideas from frequency domain principal component analysis to two dimensions, we present results for general 2-D discrete nonstationary and stationary second-order processes, showing that the optimal filters are nonseparable. Particular attention is paid to separable random fields, proving that only the first and last filters of the optimal PRFB are separable in this case. Simulation results that illustrate the theoretical achievements are presented.@article{Tirakis1995Filter,author={Andreas Tirakis and Anastasios Delopoulos and Stefanos Kollias},title={2-D Filter Bank Design for Optimal Reconstruction using Limited Subband Information},journal={IEEE Transactions on Image Processing},volume={4},number={8},pages={1160-1165},year={1995},month={08},date={1995-08-01},url={http://dx.doi.org/10.1109/83.403423},doi={https://doi.org/10.1109/83.403423},abstract={In this correspondence, we propose design techniques for analysis and synthesis filters of 2-D perfect reconstruction filter banks (PRFBs) that perform optimal reconstruction when a reduced number of subband signals is used. Based on the minimization of the squared error between the original signal and some low-resolution representation of it, the 2-D filters are optimally adjusted to the statistics of the input images so that most of the signal's energy is concentrated in the first few subband components. This property makes the optimal PRFBs efficient for image compression and pattern representations at lower resolutions for classification purposes. By extending recently introduced ideas from frequency domain principal component analysis to two dimensions, we present results for general 2-D discrete nonstationary and stationary second-order processes, showing that the optimal filters are nonseparable. Particular attention is paid to separable random fields, proving that only the first and last filters of the optimal PRFB are separable in this case. Simulation results that illustrate the theoretical achievements are presented.}}
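
The first entry's central idea, estimating autocorrelation (within a scale) from third-order cumulants so that additive Gaussian noise drops out, can be sketched as below; the rectangular window, lag ranges, and biased cumulant estimator are illustrative choices rather than the paper's exact algorithm.

```python
import numpy as np

def c3(x, k, tau):
    """Biased sample third-order cumulant slice E[x(n) x(n+k) x(n+tau)] for
    non-negative lags; third-order cumulants of Gaussian noise vanish."""
    x = x - x.mean()
    m = len(x) - max(k, tau)
    return np.dot(x[:m] * x[k:k + m], x[tau:tau + m]) / len(x)

def autocorr_from_cumulants(x, max_lag, K=20):
    """Windowed projection r(tau) ~ sum_k w(k) c3(k, tau) with a rectangular
    window w; for a linear non-Gaussian process this is proportional to the
    autocorrelation sequence."""
    return np.array([sum(c3(x, k, tau) for k in range(K))
                     for tau in range(max_lag + 1)])
```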

1994

 (J) Anastasios Delopoulos and Georgios B. Giannakis "Consistent identification of stochastic linear systems with noisy input-output data" Automatica, 30, (8), pp. 1271-1294, 1994 Aug [Abstract][BibTex][pdf]A novel criterion is introduced for parametric errors-in-variables identification of stochastic linear systems excited by non-Gaussian inputs. The new criterion is (at least theoretically) insensitive to a class of input-output disturbances because it implicitly involves higher-than-second-order cumulant statistics. In addition, it is shown to be equivalent to the conventional Mean-Squared Error (MSE) as if the latter was computed in the ideal case of noise-free input-output data. The sampled version of the criterion converges to the novel MSE and guarantees strongly consistent parameter estimators. The asymptotic behavior of the resulting parameter estimators is analyzed and guidelines for minimum variance experiments are discussed briefly. Sufficiently informative input signals and persistence-of-excitation conditions are specified. Computationally attractive Recursive-Least-Squares variants are also developed for on-line implementation of ARMA modeling, and their potential is illustrated by applying them to time-delay estimation in a low-SNR environment. The performance of the proposed algorithms and comparisons with conventional methods are corroborated using simulated data.@article{Delopoulos1994Consistent,author={Anastasios Delopoulos and Georgios B. Giannakis},title={Consistent identification of stochastic linear systems with noisy input-output data},journal={Automatica},volume={30},number={8},pages={1271-1294},year={1994},month={08},date={1994-08-01},url={http://mug.ee.auth.gr/wp-content/uploads/1-s2.0-0005109894901082-main.pdf},doi={https://doi.org/10.1016/0005-1098(94)90108-2},abstract={A novel criterion is introduced for parametric errors-in-variables identification of stochastic linear systems excited by non-Gaussian inputs. The new criterion is (at least theoretically) insensitive to a class of input-output disturbances because it implicitly involves higher-than-second-order cumulant statistics. In addition, it is shown to be equivalent to the conventional Mean-Squared Error (MSE) as if the latter was computed in the ideal case of noise-free input-output data. The sampled version of the criterion converges to the novel MSE and guarantees strongly consistent parameter estimators. The asymptotic behavior of the resulting parameter estimators is analyzed and guidelines for minimum variance experiments are discussed briefly. Sufficiently informative input signals and persistence-of-excitation conditions are specified. Computationally attractive Recursive-Least-Squares variants are also developed for on-line implementation of ARMA modeling, and their potential is illustrated by applying them to time-delay estimation in a low-SNR environment. The performance of the proposed algorithms and comparisons with conventional methods are corroborated using simulated data.}} (J) Anastasios Delopoulos, A. Tirakis and Stefanos Kollias "Invariant image classification using triple-correlation-based neural networks" IEEE Transactions on Neural Networks, 5, (3), pp. 392-408, 1994 May [Abstract][BibTex][pdf]Triple-correlation-based neural networks are introduced and used in this paper for invariant classification of 2D gray scale images. Third-order correlations of an image are appropriately clustered, in spatial or spectral domain, to generate an equivalent image representation that is invariant with respect to translation, rotation, and dilation. An efficient implementation scheme is also proposed, which is robust to distortions, insensitive to additive noise, and classifies the original image using adequate neural network architectures applied directly to 2D image representations. Third-order neural networks are shown to be a specific category of triple-correlation-based networks, applied either to binary or gray-scale images. A simulation study is given, which illustrates the theoretical developments, using synthetic and real image data.@article{Delopoulos1994Invariant,author={Anastasios Delopoulos and A. Tirakis and Stefanos Kollias},title={Invariant image classification using triple-correlation-based neural networks},journal={IEEE Transactions on Neural Networks},volume={5},number={3},pages={392-408},year={1994},month={05},date={1994-05-01},url={http://dx.doi.org/10.1109/72.286911},doi={https://doi.org/10.1109/72.286911},abstract={Triple-correlation-based neural networks are introduced and used in this paper for invariant classification of 2D gray scale images. Third-order correlations of an image are appropriately clustered, in spatial or spectral domain, to generate an equivalent image representation that is invariant with respect to translation, rotation, and dilation. An efficient implementation scheme is also proposed, which is robust to distortions, insensitive to additive noise, and classifies the original image using adequate neural network architectures applied directly to 2D image representations. Third-order neural networks are shown to be a specific category of triple-correlation-based networks, applied either to binary or gray-scale images. A simulation study is given, which illustrates the theoretical developments, using synthetic and real image data.}}
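
The translation invariance that the second entry exploits is easy to verify numerically. The snippet below does so in 1-D for brevity (the paper works with 2-D images and also handles rotation and dilation): the circular triple correlation, computed as the inverse 2-D DFT of the bispectrum X(f1) X(f2) X*(f1+f2), is identical for a signal and any circular shift of it.

```python
import numpy as np

def triple_correlation(x):
    """Circular triple correlation t(s1, s2) = sum_n x(n) x(n+s1) x(n+s2),
    obtained as the inverse 2-D DFT of the bispectrum."""
    n = len(x)
    X = np.fft.fft(x)
    idx = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
    B = X[:, None] * X[None, :] * np.conj(X[idx])    # bispectrum B(f1, f2)
    return np.fft.ifft2(B).real

x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(triple_correlation(x), triple_correlation(np.roll(x, 17)))
```

The invariance follows because a shift multiplies X(f) by a phase that cancels exactly in the product X(f1) X(f2) X*(f1+f2).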

1992

 (J) Anastasios Delopoulos and Georgios B. Giannakis "Strongly consistent identification algorithms and noise insensitive MSE criteria" IEEE Transactions on Signal Processing, 40, (8), pp. 1955-1970, 1992 Aug [Abstract][BibTex][pdf]Windowed cumulant projections of non-Gaussian linear processes yield autocorrelation estimators which are immune to additive Gaussian noise of unknown covariance. By establishing strong consistency of these estimators, strongly consistent and noise insensitive recursive algorithms are developed for parameter estimation. These computationally attractive schemes are shown to be optimal with respect to a modified mean-square-error (MSE) criterion which implicitly exploits the high signal-to-noise ratio domain of cumulant statistics. The novel MSE objective function is expressed in terms of the noisy process, but it is shown to be a scalar multiple of the standard MSE criterion as if the latter was computed in the absence of noise. Simulations illustrate the performance of the proposed algorithms and compare them with the conventional algorithms.@article{Delopoulos1992Strongly,author={Anastasios Delopoulos and Georgios B. Giannakis},title={Strongly consistent identification algorithms and noise insensitive MSE criteria},journal={IEEE Transactions on Signal Processing},volume={40},number={8},pages={1955-1970},year={1992},month={08},date={1992-08-01},url={http://dx.doi.org/10.1109/78.149997},doi={https://doi.org/10.1109/78.149997},abstract={Windowed cumulant projections of non-Gaussian linear processes yield autocorrelation estimators which are immune to additive Gaussian noise of unknown covariance. By establishing strong consistency of these estimators, strongly consistent and noise insensitive recursive algorithms are developed for parameter estimation. These computationally attractive schemes are shown to be optimal with respect to a modified mean-square-error (MSE) criterion which implicitly exploits the high signal-to-noise ratio domain of cumulant statistics. The novel MSE objective function is expressed in terms of the noisy process, but it is shown to be a scalar multiple of the standard MSE criterion as if the latter was computed in the absence of noise. Simulations illustrate the performance of the proposed algorithms and compare them with the conventional algorithms.}}
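
For context on the recursive side of this work, the textbook recursive-least-squares update for AR(p) coefficients is sketched below; the paper's noise-insensitive variants differ in that the statistics driving such recursions come from cumulant-based autocorrelation estimates rather than from raw (noisy) data correlations. The forgetting factor and initialization are conventional illustrative choices.

```python
import numpy as np

def rls_ar(x, p, lam=0.99, delta=100.0):
    """Recursive least squares for AR(p): x(n) ~ a . [x(n-1), ..., x(n-p)]."""
    a = np.zeros(p)
    P = delta * np.eye(p)                      # initial inverse-correlation matrix
    for n in range(p, len(x)):
        phi = x[n - p:n][::-1]                 # regressor [x(n-1), ..., x(n-p)]
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        a = a + k * (x[n] - phi @ a)           # update with the innovation
        P = (P - np.outer(k, phi @ P)) / lam   # propagate inverse correlation
    return a
```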

1990

 (J) Georgios Giannakis and Anastasios Delopoulos "Nonparametric estimation of autocorrelation and spectra using cumulants and polyspectra" Advanced Signal Processing Algorithms, Architectures, and Implementations, 503, 1990 Nov [Abstract][BibTex][pdf]Autocorrelation and spectra of linear random processes can be expressed in terms of cumulants and polyspectra respectively. The insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop spectral estimators of deterministic and linear non-Gaussian signals using polyspectra. In the time domain, windowed projections of third-order cumulants are shown to yield consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. In the frequency domain, a Fourier-slice solution and a least-squares approach are described for performing spectral analysis through windowed bi-periodograms. Asymptotic variance expressions of the time- and frequency-domain estimators are also presented. Two-dimensional extensions are indicated and potential applications are discussed. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.@article{Giannakis1990Nonparametric,author={Georgios Giannakis and Anastasios Delopoulos},title={Nonparametric estimation of autocorrelation and spectra using cumulants and polyspectra},journal={Advanced Signal Processing Algorithms, Architectures, and Implementations},volume={503},year={1990},month={11},date={1990-11-01},url={http://dx.doi.org/10.1117/12.23504},doi={https://doi.org/10.1117/12.23504},abstract={Autocorrelation and spectra of linear random processes can be expressed in terms of cumulants and polyspectra respectively. The insensitivity of the latter to additive Gaussian noise of unknown covariance is exploited in this paper to develop spectral estimators of deterministic and linear non-Gaussian signals using polyspectra. In the time domain, windowed projections of third-order cumulants are shown to yield consistent estimators of the autocorrelation sequence. Both batch and recursive algorithms are derived. In the frequency domain, a Fourier-slice solution and a least-squares approach are described for performing spectral analysis through windowed bi-periodograms. Asymptotic variance expressions of the time- and frequency-domain estimators are also presented. Two-dimensional extensions are indicated and potential applications are discussed. Simulations are provided to illustrate the performance of the proposed algorithms and compare them with conventional approaches.}}
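
A rough illustration of the Fourier-slice idea in this entry: for a zero-mean linear non-Gaussian process, the bispectrum slice B(f, 0) = γ3 H(0) |H(f)|² is proportional to the power spectrum, while the third-order statistics of additive Gaussian noise vanish, so averaging the corresponding bi-periodogram slice over segments recovers the spectrum within a scale. The segment length and plain averaging are illustrative choices rather than the paper's windowed estimator.

```python
import numpy as np

def spectrum_from_bispectrum_slice(x, nseg=64):
    """Average the bi-periodogram slice (1/N) X(f) X(0) conj(X(f)) over
    segments; proportional to the power spectrum for linear non-Gaussian x."""
    segs = x[:len(x) // nseg * nseg].reshape(-1, nseg)
    X = np.fft.fft(segs, axis=1)
    slices = (np.abs(X) ** 2) * X[:, [0]].real / nseg   # X(0) is real for real x
    return slices.mean(axis=0)
```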