Publications



2021

(J)
An interpretable multiple-instance approach for the detection of referable diabetic retinopathy in fundus images
Alexandros Papadopoulos, Fotis Topouzis and Anastasios Delopoulos
Scientific Reports, 2021 Jul
[Abstract][BibTex][pdf]

Diabetic retinopathy (DR) is one of the leading causes of vision loss across the world. Yet despite its wide prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for monitoring their condition. This can lead to delays in the start of treatment, thereby lowering their chances for a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to retinopathy severity estimates for patients in remote regions or even for complementing the human expert’s diagnosis. Here we propose a machine learning system for the detection of referable diabetic retinopathy in fundus images, which is based on the paradigm of multiple-instance learning. Our method extracts local information independently from multiple rectangular image patches and combines it efficiently through an attention mechanism that focuses on the abnormal regions of the eye (i.e. those that contain DR-induced lesions), thus resulting in a final image representation that is suitable for classification. Furthermore, by leveraging the attention mechanism our algorithm can seamlessly produce informative heatmaps that highlight the regions where the lesions are located. We evaluate our approach on the publicly available Kaggle, Messidor-2 and IDRiD retinal image datasets, in which it exhibits near state-of-the-art classification performance (AUC of 0.961 in Kaggle and 0.976 in Messidor-2), while also producing valid lesion heatmaps (AUPRC of 0.869 in the 81 images of IDRiD that contain pixel-level lesion annotations). Our results suggest that the proposed approach provides an efficient and interpretable solution to the problem of automated diabetic retinopathy grading.

@article{alpapado2021,
author={Alexandros Papadopoulos and Fotis Topouzis and Anastasios Delopoulos},
title={An interpretable multiple-instance approach for the detection of referable diabetic retinopathy in fundus images},
journal={Scientific Reports},
year={2021},
month={07},
date={2021-07-12},
url={https://www.nature.com/articles/s41598-021-93632-8.pdf},
doi={https://doi.org/10.1038/s41598-021-93632-8},
abstract={Diabetic retinopathy (DR) is one of the leading causes of vision loss across the world. Yet despite its wide prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for monitoring their condition. This can lead to delays in the start of treatment, thereby lowering their chances for a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to retinopathy severity estimates for patients in remote regions or even for complementing the human expert’s diagnosis. Here we propose a machine learning system for the detection of referable diabetic retinopathy in fundus images, which is based on the paradigm of multiple-instance learning. Our method extracts local information independently from multiple rectangular image patches and combines it efficiently through an attention mechanism that focuses on the abnormal regions of the eye (i.e. those that contain DR-induced lesions), thus resulting in a final image representation that is suitable for classification. Furthermore, by leveraging the attention mechanism our algorithm can seamlessly produce informative heatmaps that highlight the regions where the lesions are located. We evaluate our approach on the publicly available Kaggle, Messidor-2 and IDRiD retinal image datasets, in which it exhibits near state-of-the-art classification performance (AUC of 0.961 in Kaggle and 0.976 in Messidor-2), while also producing valid lesion heatmaps (AUPRC of 0.869 in the 81 images of IDRiD that contain pixel-level lesion annotations). Our results suggest that the proposed approach provides an efficient and interpretable solution to the problem of automated diabetic retinopathy grading.}
}
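The attention mechanism described in the abstract weighs each image patch by its estimated relevance before averaging. A minimal NumPy sketch of such attention-based MIL pooling follows; the parameter shapes and the plain tanh attention are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(patch_feats, v, w):
    """Pool per-patch features (n_patches, d) into one image vector (d,).

    v: (d, d) and w: (d,) are toy attention parameters; in a real model
    they would be learned jointly with the patch feature extractor.
    """
    scores = np.tanh(patch_feats @ v) @ w   # one relevance score per patch
    alpha = softmax(scores)                 # attention weights, sum to 1
    return alpha, alpha @ patch_feats       # weighted average of patches
```

The attention weights `alpha` are exactly what the paper visualises as lesion heatmaps: patches with high weight are the ones driving the image-level decision.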

2020

(J)
Unobtrusive detection of Parkinson’s disease from multi-modal and in-the-wild sensor data using deep learning techniques
Alexandros Papadopoulos, Dimitrios Iakovakis, Lisa Klingelhoefer, Sevasti Bostantjopoulou, K. Ray Chaudhuri, Konstantinos Kyritsis, Stelios Hadjidimitriou, Vasileios Charisis, Leontios J. Hadjileontiadis and Anastasios Delopoulos
Scientific Reports, 2020 Dec
[Abstract][BibTex][pdf]

Parkinson’s Disease (PD) is the second most common neurodegenerative disorder, affecting more than 1% of the population above 60 years old with both motor and non-motor symptoms of escalating severity as it progresses. Since it cannot be cured, treatment options focus on the improvement of PD symptoms. In fact, evidence suggests that early PD intervention has the potential to slow down symptom progression and improve the general quality of life in the long term. However, the initial motor symptoms are usually very subtle and, as a result, patients seek medical assistance only when their condition has substantially deteriorated; thus, missing the opportunity for an improved clinical outcome. This situation highlights the need for accessible tools that can screen for early motor PD symptoms and alert individuals to act accordingly. Here we show that PD and its motor symptoms can unobtrusively be detected from the combination of accelerometer and touchscreen typing data that are passively captured during natural user-smartphone interaction. To this end, we introduce a deep learning framework that analyses such data to simultaneously predict tremor, fine-motor impairment and PD. In a validation dataset from 22 clinically-assessed subjects (8 Healthy Controls (HC)/14 PD patients with a total data contribution of 18,305 accelerometer and 2,922 typing sessions), the proposed approach achieved 0.86/0.93 sensitivity/specificity for the binary classification task of HC versus PD. Additional validation on data from 157 subjects (131 HC/26 PD with a total contribution of 76,528 accelerometer and 18,069 typing sessions) with self-reported health status (HC or PD), resulted in area under curve of 0.87, with sensitivity/specificity of 0.92/0.69 and 0.60/0.92 at the operating points of highest sensitivity or specificity, respectively. Our findings suggest that the proposed method can be used as a stepping stone towards the development of an accessible PD screening tool that will passively monitor the subject-smartphone interaction for signs of PD and which could be used to reduce the critical gap between disease onset and start of treatment.

@article{alpapado2020,
author={Alexandros Papadopoulos and Dimitrios Iakovakis and Lisa Klingelhoefer and Sevasti Bostantjopoulou and K. Ray Chaudhuri and Konstantinos Kyritsis and Stelios Hadjidimitriou and Vasileios Charisis and Leontios J. Hadjileontiadis and Anastasios Delopoulos},
title={Unobtrusive detection of Parkinson’s disease from multi-modal and in-the-wild sensor data using deep learning techniques},
journal={Scientific Reports},
year={2020},
month={12},
date={2020-12-07},
url={https://www.nature.com/articles/s41598-020-78418-8},
doi={https://doi.org/10.1038/s41598-020-78418-8},
abstract={Parkinson’s Disease (PD) is the second most common neurodegenerative disorder, affecting more than 1% of the population above 60 years old with both motor and non-motor symptoms of escalating severity as it progresses. Since it cannot be cured, treatment options focus on the improvement of PD symptoms. In fact, evidence suggests that early PD intervention has the potential to slow down symptom progression and improve the general quality of life in the long term. However, the initial motor symptoms are usually very subtle and, as a result, patients seek medical assistance only when their condition has substantially deteriorated; thus, missing the opportunity for an improved clinical outcome. This situation highlights the need for accessible tools that can screen for early motor PD symptoms and alert individuals to act accordingly. Here we show that PD and its motor symptoms can unobtrusively be detected from the combination of accelerometer and touchscreen typing data that are passively captured during natural user-smartphone interaction. To this end, we introduce a deep learning framework that analyses such data to simultaneously predict tremor, fine-motor impairment and PD. In a validation dataset from 22 clinically-assessed subjects (8 Healthy Controls (HC)/14 PD patients with a total data contribution of 18,305 accelerometer and 2,922 typing sessions), the proposed approach achieved 0.86/0.93 sensitivity/specificity for the binary classification task of HC versus PD. Additional validation on data from 157 subjects (131 HC/26 PD with a total contribution of 76,528 accelerometer and 18,069 typing sessions) with self-reported health status (HC or PD), resulted in area under curve of 0.87, with sensitivity/specificity of 0.92/0.69 and 0.60/0.92 at the operating points of highest sensitivity or specificity, respectively. Our findings suggest that the proposed method can be used as a stepping stone towards the development of an accessible PD screening tool that will passively monitor the subject-smartphone interaction for signs of PD and which could be used to reduce the critical gap between disease onset and start of treatment.}
}
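The abstract reports sensitivity/specificity pairs at two operating points on the ROC curve. A generic sketch of how such operating-point metrics are computed from model scores (not code from the paper) is:

```python
import numpy as np

def sens_spec(y_true, y_score, thr):
    """Sensitivity and specificity of thresholded scores.

    y_true: binary labels (1 = PD), y_score: model outputs in [0, 1].
    Sweeping `thr` over all score values traces the ROC curve whose
    area is the reported AUC.
    """
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= thr).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)
```

Choosing the threshold that maximises sensitivity (or specificity) yields the two operating points quoted in the abstract.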

2019

(J)
Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning
Alexandros Papadopoulos, Konstantinos Kyritsis, Lisa Klingelhoefer, Sevasti Bostanjopoulou, K. Ray Chaudhuri and Anastasios Delopoulos
IEEE Journal of Biomedical and Health Informatics, 2019 Dec
[Abstract][BibTex][pdf]

Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old, causing symptoms that are subtle at first, but whose intensity increases as the disease progresses. Automated detection of these symptoms could offer clues as to the early onset of the disease, thus improving the expected clinical outcomes of the patients via appropriately targeted interventions. This potential has led many researchers to develop methods that use widely available sensors to measure and quantify the presence of PD symptoms such as tremor, rigidity and bradykinesia. However, most of these approaches operate under controlled settings, such as in the lab or at home, thus limiting their applicability under free-living conditions. In this work, we present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device. We propose a Multiple-Instance Learning approach, wherein a subject is represented as an unordered bag of accelerometer signal segments and a single, expert-provided, tremor annotation. Our method combines deep feature learning with a learnable pooling stage that is able to identify key instances within the subject bag, while still being trainable end-to-end. We validate our algorithm on a newly introduced dataset of 45 subjects, containing accelerometer signals collected entirely in-the-wild. The good classification performance obtained in the conducted experiments suggests that the proposed method can efficiently navigate the noisy environment of in-the-wild recordings.

@article{alpapado2019detecting,
author={Alexandros Papadopoulos and Konstantinos Kyritsis and Lisa Klingelhoefer and Sevasti Bostanjopoulou and K. Ray Chaudhuri and Anastasios Delopoulos},
title={Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning},
journal={IEEE Journal of Biomedical and Health Informatics},
year={2019},
month={12},
date={2019-12-24},
url={http://mug.ee.auth.gr/wp-content/uploads/alpapado2019detecting.pdf},
doi={https://doi.org/10.1109/JBHI.2019.2961748},
abstract={Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old, causing symptoms that are subtle at first, but whose intensity increases as the disease progresses. Automated detection of these symptoms could offer clues as to the early onset of the disease, thus improving the expected clinical outcomes of the patients via appropriately targeted interventions. This potential has led many researchers to develop methods that use widely available sensors to measure and quantify the presence of PD symptoms such as tremor, rigidity and bradykinesia. However, most of these approaches operate under controlled settings, such as in the lab or at home, thus limiting their applicability under free-living conditions. In this work, we present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device. We propose a Multiple-Instance Learning approach, wherein a subject is represented as an unordered bag of accelerometer signal segments and a single, expert-provided, tremor annotation. Our method combines deep feature learning with a learnable pooling stage that is able to identify key instances within the subject bag, while still being trainable end-to-end. We validate our algorithm on a newly introduced dataset of 45 subjects, containing accelerometer signals collected entirely in-the-wild. The good classification performance obtained in the conducted experiments suggests that the proposed method can efficiently navigate the noisy environment of in-the-wild recordings.}
}
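The bag-of-segments representation described in the abstract can be sketched in a few lines. The windowing parameters are illustrative assumptions, and max pooling stands in for the paper's learnable pooling stage, which is trained end-to-end with the feature extractor:

```python
import numpy as np

def make_bag(signal, win, step):
    """Split one subject's accelerometer trace into overlapping windows.

    The windows form an unordered "bag" of instances that shares a single
    subject-level tremor label, as in the multiple-instance setup above.
    """
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[i:i + win] for i in starts])

def bag_score(instance_scores):
    """Aggregate per-window tremor scores into one subject-level score.

    Max pooling is the classic MIL aggregator; the paper instead learns
    the pooling, letting the model identify the key instances itself.
    """
    return float(np.max(instance_scores))
```

Only the bag-level label is ever needed for training, which is what makes the approach practical for in-the-wild data where per-segment annotation is infeasible.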

(C)
Multiple-Instance Learning for In-The-Wild Parkinsonian Tremor Detection
A. Papadopoulos, K. Kyritsis, S. Bostanjopoulou, L. Klingelhoefer, R. K. Chaudhuri and A. Delopoulos
2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019 Jul
[Abstract][BibTex][pdf]

Parkinson’s Disease (PD) is a neurodegenerative disorder that manifests through slowly progressing symptoms, such as tremor, voice degradation and bradykinesia. Automated detection of such symptoms has recently received much attention by the research community, owing to the clinical benefits associated with the early diagnosis of the disease. Unfortunately, most of the approaches proposed so far operate under a strict laboratory setting, thus limiting their potential applicability in real-world conditions. In this work, we present a method for automatically detecting tremorous episodes related to PD, based on acceleration signals. We propose to address the problem at hand as a case of Multiple-Instance Learning, wherein a subject is represented as an unordered bag of signal segments and a single, expert-provided, ground-truth. We employ a deep learning approach that combines feature learning and a learnable pooling stage and is trainable end-to-end. Results on a newly introduced dataset of accelerometer signals collected in-the-wild confirm the validity of the proposed approach.

@conference{alpapado2019embc,
author={A. Papadopoulos and K. Kyritsis and S. Bostanjopoulou and L. Klingelhoefer and R. K. Chaudhuri and A. Delopoulos},
title={Multiple-Instance Learning for In-The-Wild Parkinsonian Tremor Detection},
booktitle={2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
year={2019},
month={07},
date={2019-07-23},
url={http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8856314&isnumber=8856280},
doi={https://doi.org/10.1109/EMBC.2019.8856314},
abstract={Parkinson’s Disease (PD) is a neurodegenerative disorder that manifests through slowly progressing symptoms, such as tremor, voice degradation and bradykinesia. Automated detection of such symptoms has recently received much attention by the research community, owing to the clinical benefits associated with the early diagnosis of the disease. Unfortunately, most of the approaches proposed so far operate under a strict laboratory setting, thus limiting their potential applicability in real-world conditions. In this work, we present a method for automatically detecting tremorous episodes related to PD, based on acceleration signals. We propose to address the problem at hand as a case of Multiple-Instance Learning, wherein a subject is represented as an unordered bag of signal segments and a single, expert-provided, ground-truth. We employ a deep learning approach that combines feature learning and a learnable pooling stage and is trainable end-to-end. Results on a newly introduced dataset of accelerometer signals collected in-the-wild confirm the validity of the proposed approach.}
}

2018

(C)
Personalised meal eating behaviour analysis via semi-supervised learning
Alexandros Papadopoulos, Konstantinos Kyritsis, Ioannis Sarafis and Anastasios Delopoulos
40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Honolulu, HI, USA, 2018 Oct
[Abstract][BibTex][pdf]

Automated monitoring and analysis of eating behaviour patterns, i.e., “how one eats”, has recently received much attention by the research community, owing to the association of eating patterns with health-related problems and especially obesity and its comorbidities. In this work, we introduce an improved method for meal micro-structure analysis. Building on a previous methodology of ours that combines feature extraction, SVM micro-movement classification and LSTM sequence modelling, we propose a method to adapt a pretrained IMU-based food intake cycle detection model to a new subject, with the purpose of improving model performance for that subject. We split model training into two stages. First, the model is trained using standard supervised learning techniques. Then, an adaptation step is performed, where the model is fine-tuned on unlabeled samples of the target subject via semi-supervised learning. Evaluation is performed on a publicly available dataset that was originally created and used in [1] and has been extended here to demonstrate the effect of the semi-supervised approach, where the proposed method improves over the baseline method.

@conference{papadopoulos2018personalised,
author={Alexandros Papadopoulos and Konstantinos Kyritsis and Ioannis Sarafis and Anastasios Delopoulos},
title={Personalised meal eating behaviour analysis via semi-supervised learning},
booktitle={40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
publisher={IEEE},
address={Honolulu, HI, USA},
year={2018},
month={10},
date={2018-10-29},
url={http://mug.ee.auth.gr/wp-content/uploads/papadopoulos2018personalised.pdf},
doi={https://doi.org/10.1109/EMBC.2018.8513174},
abstract={Automated monitoring and analysis of eating behaviour patterns, i.e., “how one eats”, has recently received much attention by the research community, owing to the association of eating patterns with health-related problems and especially obesity and its comorbidities. In this work, we introduce an improved method for meal micro-structure analysis. Building on a previous methodology of ours that combines feature extraction, SVM micro-movement classification and LSTM sequence modelling, we propose a method to adapt a pretrained IMU-based food intake cycle detection model to a new subject, with the purpose of improving model performance for that subject. We split model training into two stages. First, the model is trained using standard supervised learning techniques. Then, an adaptation step is performed, where the model is fine-tuned on unlabeled samples of the target subject via semi-supervised learning. Evaluation is performed on a publicly available dataset that was originally created and used in [1] and has been extended here to demonstrate the effect of the semi-supervised approach, where the proposed method improves over the baseline method.}
}
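The second, adaptation stage described in the abstract fine-tunes the model on unlabeled samples from the target subject. One common way to implement such semi-supervised fine-tuning is pseudo-labeling; the sketch below uses illustrative confidence thresholds and is not necessarily the paper's exact scheme:

```python
import numpy as np

def pseudo_label(scores, low=0.2, high=0.8):
    """Select confident unlabeled target-subject samples for fine-tuning.

    Samples scoring above `high` get pseudo-label 1, those below `low`
    get pseudo-label 0, and everything in between is discarded. The
    thresholds are illustrative assumptions.
    """
    scores = np.asarray(scores)
    keep = (scores <= low) | (scores >= high)
    labels = (scores >= high).astype(int)
    return keep, labels
```

The kept samples and their pseudo-labels would then be mixed into a second round of training, adapting the pretrained intake-detection model to the new subject without requiring any manual annotation from them.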